00:00:00.001 Started by upstream project "autotest-per-patch" build number 120487 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.066 Fetching changes from the remote Git repository 00:00:00.067 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.095 Using shallow fetch with depth 1 00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.095 > git --version # timeout=10 00:00:00.125 > git --version # 'git version 2.39.2' 00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.126 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.126 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.337 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.347 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.358 Checking out Revision 27f13fcb4eea6a447c9f3d131408acb483141c09 (FETCH_HEAD) 00:00:03.358 > git config core.sparsecheckout # timeout=10 00:00:03.367 > git read-tree -mu HEAD # timeout=10 00:00:03.382 > git checkout -f 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=5 00:00:03.400 Commit message: "docker/pdu_power: add PDU APC-C14 and APC-C18" 00:00:03.400 > git rev-list --no-walk 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=10 00:00:03.514 [Pipeline] Start of Pipeline 00:00:03.526 [Pipeline] library 00:00:03.527 Loading library shm_lib@master 00:00:03.527 Library shm_lib@master is cached. Copying from home. 00:00:03.545 [Pipeline] node 00:00:03.551 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:03.556 [Pipeline] { 00:00:03.568 [Pipeline] catchError 00:00:03.570 [Pipeline] { 00:00:03.585 [Pipeline] wrap 00:00:03.595 [Pipeline] { 00:00:03.603 [Pipeline] stage 00:00:03.605 [Pipeline] { (Prologue) 00:00:03.631 [Pipeline] echo 00:00:03.632 Node: VM-host-SM17 00:00:03.637 [Pipeline] cleanWs 00:00:03.645 [WS-CLEANUP] Deleting project workspace... 00:00:03.645 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.650 [WS-CLEANUP] done 00:00:03.839 [Pipeline] setCustomBuildProperty 00:00:03.912 [Pipeline] nodesByLabel 00:00:03.913 Found a total of 1 nodes with the 'sorcerer' label 00:00:03.928 [Pipeline] httpRequest 00:00:03.935 HttpMethod: GET 00:00:03.936 URL: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:00:03.937 Sending request to url: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:00:03.939 Response Code: HTTP/1.1 200 OK 00:00:03.939 Success: Status code 200 is in the accepted range: 200,404 00:00:03.940 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:00:04.206 [Pipeline] sh 00:00:04.495 + tar --no-same-owner -xf jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:00:04.513 [Pipeline] httpRequest 00:00:04.517 HttpMethod: GET 00:00:04.518 URL: http://10.211.164.101/packages/spdk_480afb9a12473fc529dcfc0e401239bf7cd1ac08.tar.gz 00:00:04.518 Sending request to url: http://10.211.164.101/packages/spdk_480afb9a12473fc529dcfc0e401239bf7cd1ac08.tar.gz 00:00:04.520 Response Code: HTTP/1.1 200 OK 00:00:04.521 Success: Status code 200 is in the accepted range: 200,404 00:00:04.521 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_480afb9a12473fc529dcfc0e401239bf7cd1ac08.tar.gz 00:00:24.131 [Pipeline] sh 00:00:24.443 + tar --no-same-owner -xf spdk_480afb9a12473fc529dcfc0e401239bf7cd1ac08.tar.gz 00:00:27.742 [Pipeline] sh 00:00:28.023 + git -C spdk log --oneline -n5 00:00:28.023 480afb9a1 raid: remove base_bdev_lock 00:00:28.023 b01acb55d raid: fix some issues in raid_bdev_write_config_json() 00:00:28.023 0d5f01bd8 raid: examine other bdevs when starting from superblock 00:00:28.023 79a744ed0 raid: factor out a function to get a raid bdev by uuid 00:00:28.023 b242249a1 raid: factor out examine code 00:00:28.042 [Pipeline] writeFile 00:00:28.057 [Pipeline] sh 00:00:28.338 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:28.350 [Pipeline] sh 00:00:28.630 + cat autorun-spdk.conf 00:00:28.630 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.630 SPDK_TEST_NVMF=1 00:00:28.630 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.630 SPDK_TEST_URING=1 00:00:28.630 SPDK_TEST_USDT=1 00:00:28.630 SPDK_RUN_UBSAN=1 00:00:28.630 NET_TYPE=virt 00:00:28.630 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.637 RUN_NIGHTLY=0 00:00:28.640 [Pipeline] } 00:00:28.657 [Pipeline] // stage 00:00:28.672 [Pipeline] stage 00:00:28.675 [Pipeline] { (Run VM) 00:00:28.688 [Pipeline] sh 00:00:28.966 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.966 + echo 'Start stage prepare_nvme.sh' 00:00:28.966 Start stage prepare_nvme.sh 00:00:28.966 + [[ -n 1 ]] 00:00:28.966 + disk_prefix=ex1 00:00:28.967 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:00:28.967 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:00:28.967 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:00:28.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.967 ++ SPDK_TEST_NVMF=1 00:00:28.967 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.967 ++ SPDK_TEST_URING=1 00:00:28.967 ++ SPDK_TEST_USDT=1 00:00:28.967 ++ SPDK_RUN_UBSAN=1 00:00:28.967 ++ NET_TYPE=virt 00:00:28.967 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.967 ++ RUN_NIGHTLY=0 00:00:28.967 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:28.967 + nvme_files=() 00:00:28.967 + declare -A nvme_files 00:00:28.967 + 
backend_dir=/var/lib/libvirt/images/backends 00:00:28.967 + nvme_files['nvme.img']=5G 00:00:28.967 + nvme_files['nvme-cmb.img']=5G 00:00:28.967 + nvme_files['nvme-multi0.img']=4G 00:00:28.967 + nvme_files['nvme-multi1.img']=4G 00:00:28.967 + nvme_files['nvme-multi2.img']=4G 00:00:28.967 + nvme_files['nvme-openstack.img']=8G 00:00:28.967 + nvme_files['nvme-zns.img']=5G 00:00:28.967 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.967 + (( SPDK_TEST_FTL == 1 )) 00:00:28.967 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.967 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:28.967 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.967 + for nvme in "${!nvme_files[@]}" 00:00:28.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:29.902 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.902 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:29.902 + echo 'End stage prepare_nvme.sh' 00:00:29.902 End stage prepare_nvme.sh 00:00:29.915 [Pipeline] sh 00:00:30.196 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.196 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:00:30.196 00:00:30.196 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:00:30.196 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:00:30.196 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:30.196 HELP=0 00:00:30.196 DRY_RUN=0 00:00:30.196 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:30.196 NVME_DISKS_TYPE=nvme,nvme, 00:00:30.196 NVME_AUTO_CREATE=0 00:00:30.196 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:30.196 NVME_CMB=,, 00:00:30.196 NVME_PMR=,, 00:00:30.196 NVME_ZNS=,, 00:00:30.196 NVME_MS=,, 00:00:30.196 NVME_FDP=,, 00:00:30.196 SPDK_VAGRANT_DISTRO=fedora38 00:00:30.196 SPDK_VAGRANT_VMCPU=10 00:00:30.196 SPDK_VAGRANT_VMRAM=12288 00:00:30.196 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.196 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.196 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.196 SPDK_OPENSTACK_NETWORK=0 00:00:30.196 VAGRANT_PACKAGE_BOX=0 00:00:30.196 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:00:30.196 FORCE_DISTRO=true 00:00:30.196 VAGRANT_BOX_VERSION= 00:00:30.196 EXTRA_VAGRANTFILES= 00:00:30.196 NIC_MODEL=e1000 00:00:30.196 00:00:30.196 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt' 00:00:30.196 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:33.480 Bringing machine 'default' up with 'libvirt' provider... 00:00:34.046 ==> default: Creating image (snapshot of base box volume). 00:00:34.304 ==> default: Creating domain with the following settings... 00:00:34.304 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713367415_c03fb4cf9c451766da3e 00:00:34.304 ==> default: -- Domain type: kvm 00:00:34.304 ==> default: -- Cpus: 10 00:00:34.304 ==> default: -- Feature: acpi 00:00:34.304 ==> default: -- Feature: apic 00:00:34.304 ==> default: -- Feature: pae 00:00:34.304 ==> default: -- Memory: 12288M 00:00:34.304 ==> default: -- Memory Backing: hugepages: 00:00:34.304 ==> default: -- Management MAC: 00:00:34.304 ==> default: -- Loader: 00:00:34.304 ==> default: -- Nvram: 00:00:34.304 ==> default: -- Base box: spdk/fedora38 00:00:34.304 ==> default: -- Storage pool: default 00:00:34.304 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713367415_c03fb4cf9c451766da3e.img (20G) 00:00:34.304 ==> default: -- Volume Cache: default 00:00:34.304 ==> default: -- Kernel: 00:00:34.304 ==> default: -- Initrd: 00:00:34.304 ==> default: -- Graphics Type: vnc 00:00:34.304 ==> default: -- Graphics Port: -1 00:00:34.304 ==> default: -- Graphics IP: 127.0.0.1 00:00:34.304 ==> default: -- Graphics Password: Not defined 00:00:34.304 ==> default: -- Video Type: cirrus 00:00:34.304 ==> default: -- Video VRAM: 9216 00:00:34.304 ==> default: -- Sound Type: 00:00:34.304 ==> default: -- Keymap: en-us 00:00:34.304 ==> default: -- TPM Path: 00:00:34.304 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:34.304 ==> default: -- Command line args: 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:34.304 ==> default: -> value=-drive, 00:00:34.304 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:34.304 ==> default: -> value=-drive, 00:00:34.304 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.304 ==> default: -> value=-drive, 00:00:34.304 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.304 ==> default: -> value=-drive, 00:00:34.304 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:34.304 ==> default: -> value=-device, 00:00:34.304 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.563 ==> default: Creating shared folders metadata... 00:00:34.563 ==> default: Starting domain. 00:00:36.004 ==> default: Waiting for domain to get an IP address... 00:00:57.956 ==> default: Waiting for SSH to become available... 00:00:57.956 ==> default: Configuring and enabling network interfaces... 00:00:59.856 default: SSH address: 192.168.121.92:22 00:00:59.857 default: SSH username: vagrant 00:00:59.857 default: SSH auth method: private key 00:01:02.386 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.508 ==> default: Mounting SSHFS shared folder... 00:01:11.073 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.073 ==> default: Checking Mount.. 00:01:12.446 ==> default: Folder Successfully Mounted! 00:01:12.446 ==> default: Running provisioner: file... 00:01:13.014 default: ~/.gitconfig => .gitconfig 00:01:13.577 00:01:13.577 SUCCESS! 00:01:13.577 00:01:13.577 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:01:13.577 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:13.577 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 
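Rough sketch (not taken from the log): the -device/-drive pairs printed above translate into a plain QEMU command line. Assuming the same backing images under /var/lib/libvirt/images/backends and a stock qemu-system-x86_64, the NVMe topology being created is roughly the following; machine type, memory and the boot disk are omitted.

  # Controller serial 12340 with one namespace, controller serial 12341 with three
  # namespaces backed by the multi0/multi1/multi2 images, as in the logged args.
  qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096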
00:01:13.577 00:01:13.585 [Pipeline] } 00:01:13.603 [Pipeline] // stage 00:01:13.610 [Pipeline] dir 00:01:13.610 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt 00:01:13.611 [Pipeline] { 00:01:13.625 [Pipeline] catchError 00:01:13.626 [Pipeline] { 00:01:13.639 [Pipeline] sh 00:01:13.915 + vagrant ssh-config --host vagrant 00:01:13.915 + sed -ne /^Host/,$p 00:01:13.915 + tee ssh_conf 00:01:18.100 Host vagrant 00:01:18.100 HostName 192.168.121.92 00:01:18.100 User vagrant 00:01:18.100 Port 22 00:01:18.100 UserKnownHostsFile /dev/null 00:01:18.100 StrictHostKeyChecking no 00:01:18.100 PasswordAuthentication no 00:01:18.100 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:18.100 IdentitiesOnly yes 00:01:18.100 LogLevel FATAL 00:01:18.100 ForwardAgent yes 00:01:18.100 ForwardX11 yes 00:01:18.100 00:01:18.115 [Pipeline] withEnv 00:01:18.117 [Pipeline] { 00:01:18.133 [Pipeline] sh 00:01:18.412 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:18.412 source /etc/os-release 00:01:18.412 [[ -e /image.version ]] && img=$(< /image.version) 00:01:18.412 # Minimal, systemd-like check. 00:01:18.412 if [[ -e /.dockerenv ]]; then 00:01:18.412 # Clear garbage from the node's name: 00:01:18.412 # agt-er_autotest_547-896 -> autotest_547-896 00:01:18.412 # $HOSTNAME is the actual container id 00:01:18.412 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:18.412 if mountpoint -q /etc/hostname; then 00:01:18.412 # We can assume this is a mount from a host where container is running, 00:01:18.412 # so fetch its hostname to easily identify the target swarm worker. 00:01:18.412 container="$(< /etc/hostname) ($agent)" 00:01:18.412 else 00:01:18.412 # Fallback 00:01:18.412 container=$agent 00:01:18.412 fi 00:01:18.412 fi 00:01:18.412 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:18.412 00:01:18.680 [Pipeline] } 00:01:18.698 [Pipeline] // withEnv 00:01:18.705 [Pipeline] setCustomBuildProperty 00:01:18.718 [Pipeline] stage 00:01:18.720 [Pipeline] { (Tests) 00:01:18.737 [Pipeline] sh 00:01:19.021 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:19.035 [Pipeline] timeout 00:01:19.036 Timeout set to expire in 30 min 00:01:19.038 [Pipeline] { 00:01:19.055 [Pipeline] sh 00:01:19.336 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.903 HEAD is now at 480afb9a1 raid: remove base_bdev_lock 00:01:19.914 [Pipeline] sh 00:01:20.192 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.464 [Pipeline] sh 00:01:20.742 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.013 [Pipeline] sh 00:01:21.287 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:21.546 ++ readlink -f spdk_repo 00:01:21.546 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.546 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.546 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.546 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.546 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.546 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.546 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.546 + cd /home/vagrant/spdk_repo 00:01:21.546 + source /etc/os-release 00:01:21.546 ++ NAME='Fedora Linux' 00:01:21.546 ++ VERSION='38 (Cloud Edition)' 00:01:21.546 ++ ID=fedora 00:01:21.546 ++ VERSION_ID=38 00:01:21.546 ++ VERSION_CODENAME= 00:01:21.546 ++ PLATFORM_ID=platform:f38 00:01:21.546 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.546 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.546 ++ LOGO=fedora-logo-icon 00:01:21.546 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.546 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.546 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.546 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.546 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.546 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.546 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.546 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.546 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.546 ++ SUPPORT_END=2024-05-14 00:01:21.546 ++ VARIANT='Cloud Edition' 00:01:21.546 ++ VARIANT_ID=cloud 00:01:21.546 + uname -a 00:01:21.546 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.546 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.113 Hugepages 00:01:22.113 node hugesize free / total 00:01:22.113 node0 1048576kB 0 / 0 00:01:22.113 node0 2048kB 0 / 0 00:01:22.113 00:01:22.113 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.113 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.113 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:22.113 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:01:22.113 + rm -f /tmp/spdk-ld-path 00:01:22.113 + source autorun-spdk.conf 00:01:22.113 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.113 ++ SPDK_TEST_NVMF=1 00:01:22.113 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.113 ++ SPDK_TEST_URING=1 00:01:22.113 ++ SPDK_TEST_USDT=1 00:01:22.113 ++ SPDK_RUN_UBSAN=1 00:01:22.113 ++ NET_TYPE=virt 00:01:22.113 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.113 ++ RUN_NIGHTLY=0 00:01:22.113 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.113 + [[ -n '' ]] 00:01:22.113 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.113 + for M in /var/spdk/build-*-manifest.txt 00:01:22.113 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.113 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.113 + for M in /var/spdk/build-*-manifest.txt 00:01:22.113 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.113 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.113 ++ uname 00:01:22.113 + [[ Linux == \L\i\n\u\x ]] 00:01:22.113 + sudo dmesg -T 00:01:22.113 + sudo dmesg --clear 00:01:22.113 + dmesg_pid=5102 00:01:22.113 + [[ Fedora Linux == FreeBSD ]] 00:01:22.113 + sudo dmesg -Tw 00:01:22.113 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.113 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.113 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.113 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.113 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.113 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.113 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.113 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.113 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.113 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.113 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.113 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.113 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.113 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.113 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.372 Test configuration: 00:01:22.372 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.372 SPDK_TEST_NVMF=1 00:01:22.372 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.372 SPDK_TEST_URING=1 00:01:22.372 SPDK_TEST_USDT=1 00:01:22.372 SPDK_RUN_UBSAN=1 00:01:22.372 NET_TYPE=virt 00:01:22.372 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.372 RUN_NIGHTLY=0 15:24:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.372 15:24:23 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.372 15:24:23 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.372 15:24:23 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.372 15:24:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.372 15:24:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.372 15:24:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.372 15:24:23 -- paths/export.sh@5 -- $ export PATH 00:01:22.372 15:24:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.372 15:24:23 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.372 15:24:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:22.372 15:24:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713367463.XXXXXX 00:01:22.372 15:24:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713367463.Nza2JI 00:01:22.372 15:24:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:22.372 15:24:23 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:22.372 15:24:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.372 15:24:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.372 15:24:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.372 15:24:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:22.372 15:24:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:22.372 15:24:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.372 15:24:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:22.372 15:24:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:22.372 15:24:23 -- pm/common@17 -- $ local monitor 00:01:22.372 15:24:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.372 15:24:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5137 00:01:22.372 15:24:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.372 15:24:23 -- pm/common@21 -- $ date +%s 00:01:22.372 15:24:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5139 00:01:22.372 15:24:23 -- pm/common@26 -- $ sleep 1 00:01:22.372 15:24:23 -- pm/common@21 -- $ date +%s 00:01:22.372 15:24:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713367463 00:01:22.372 15:24:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713367463 00:01:22.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713367463_collect-vmstat.pm.log 00:01:22.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713367463_collect-cpu-load.pm.log 00:01:23.313 15:24:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:23.313 15:24:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.313 15:24:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.313 15:24:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.313 15:24:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.313 Wed Apr 17 03:24:24 PM UTC 2024 00:01:23.313 15:24:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.313 v24.05-pre-398-g480afb9a1 00:01:23.313 15:24:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.313 15:24:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.313 15:24:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.313 15:24:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:23.313 15:24:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:23.313 15:24:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.572 ************************************ 00:01:23.572 START TEST ubsan 00:01:23.572 ************************************ 00:01:23.572 using ubsan 00:01:23.572 15:24:24 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 
00:01:23.572 00:01:23.572 real 0m0.000s 00:01:23.572 user 0m0.000s 00:01:23.572 sys 0m0.000s 00:01:23.572 15:24:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:23.572 15:24:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.572 ************************************ 00:01:23.572 END TEST ubsan 00:01:23.572 ************************************ 00:01:23.572 15:24:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.572 15:24:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.572 15:24:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.572 15:24:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:23.572 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.572 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.140 Using 'verbs' RDMA provider 00:01:40.014 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.215 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.215 Creating mk/config.mk...done. 00:01:52.215 Creating mk/cc.flags.mk...done. 00:01:52.215 Type 'make' to build. 00:01:52.215 15:24:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:52.215 15:24:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:52.215 15:24:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:52.215 15:24:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.215 ************************************ 00:01:52.215 START TEST make 00:01:52.215 ************************************ 00:01:52.215 15:24:52 -- common/autotest_common.sh@1111 -- $ make -j10 00:01:52.215 make[1]: Nothing to be done for 'all'. 
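Minimal sketch (not from the log): the configure and build steps driven by autobuild above can be rerun by hand, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and fio sources at /usr/src/fio, using the same flag set as the logged config_params string.

  # Same options as the configure invocation recorded above; -j10 matches "run_test make make -j10".
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10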
00:02:04.416 The Meson build system 00:02:04.416 Version: 1.3.1 00:02:04.416 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:04.416 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:04.416 Build type: native build 00:02:04.416 Program cat found: YES (/usr/bin/cat) 00:02:04.416 Project name: DPDK 00:02:04.416 Project version: 23.11.0 00:02:04.416 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:04.416 C linker for the host machine: cc ld.bfd 2.39-16 00:02:04.416 Host machine cpu family: x86_64 00:02:04.416 Host machine cpu: x86_64 00:02:04.416 Message: ## Building in Developer Mode ## 00:02:04.416 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.416 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.416 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.416 Program python3 found: YES (/usr/bin/python3) 00:02:04.416 Program cat found: YES (/usr/bin/cat) 00:02:04.416 Compiler for C supports arguments -march=native: YES 00:02:04.416 Checking for size of "void *" : 8 00:02:04.416 Checking for size of "void *" : 8 (cached) 00:02:04.416 Library m found: YES 00:02:04.416 Library numa found: YES 00:02:04.416 Has header "numaif.h" : YES 00:02:04.416 Library fdt found: NO 00:02:04.416 Library execinfo found: NO 00:02:04.416 Has header "execinfo.h" : YES 00:02:04.416 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:04.416 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.416 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.416 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.416 Run-time dependency openssl found: YES 3.0.9 00:02:04.416 Run-time dependency libpcap found: YES 1.10.4 00:02:04.416 Has header "pcap.h" with dependency libpcap: YES 00:02:04.416 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.416 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.416 Compiler for C supports arguments -Wformat: YES 00:02:04.416 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.416 Compiler for C supports arguments -Wformat-security: NO 00:02:04.416 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.416 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.416 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.416 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.416 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.416 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.416 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.416 Compiler for C supports arguments -Wundef: YES 00:02:04.416 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.416 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.416 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:04.416 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.416 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.416 Program objdump found: YES (/usr/bin/objdump) 00:02:04.416 Compiler for C supports arguments -mavx512f: YES 00:02:04.416 Checking if "AVX512 checking" compiles: YES 00:02:04.416 Fetching value of define "__SSE4_2__" : 1 00:02:04.416 Fetching value of define "__AES__" : 1 00:02:04.416 Fetching value of define "__AVX__" : 1 00:02:04.416 
Fetching value of define "__AVX2__" : 1 00:02:04.416 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.416 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.416 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.416 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.416 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.416 Fetching value of define "__PCLMUL__" : 1 00:02:04.416 Fetching value of define "__RDRND__" : 1 00:02:04.416 Fetching value of define "__RDSEED__" : 1 00:02:04.416 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.416 Fetching value of define "__znver1__" : (undefined) 00:02:04.416 Fetching value of define "__znver2__" : (undefined) 00:02:04.416 Fetching value of define "__znver3__" : (undefined) 00:02:04.416 Fetching value of define "__znver4__" : (undefined) 00:02:04.416 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.416 Message: lib/log: Defining dependency "log" 00:02:04.416 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.416 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.416 Checking for function "getentropy" : NO 00:02:04.416 Message: lib/eal: Defining dependency "eal" 00:02:04.416 Message: lib/ring: Defining dependency "ring" 00:02:04.416 Message: lib/rcu: Defining dependency "rcu" 00:02:04.416 Message: lib/mempool: Defining dependency "mempool" 00:02:04.416 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.416 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.416 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.416 Compiler for C supports arguments -mpclmul: YES 00:02:04.416 Compiler for C supports arguments -maes: YES 00:02:04.416 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.416 Compiler for C supports arguments -mavx512bw: YES 00:02:04.416 Compiler for C supports arguments -mavx512dq: YES 00:02:04.416 Compiler for C supports arguments -mavx512vl: YES 00:02:04.416 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.416 Compiler for C supports arguments -mavx2: YES 00:02:04.416 Compiler for C supports arguments -mavx: YES 00:02:04.416 Message: lib/net: Defining dependency "net" 00:02:04.416 Message: lib/meter: Defining dependency "meter" 00:02:04.416 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.416 Message: lib/pci: Defining dependency "pci" 00:02:04.416 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.416 Message: lib/hash: Defining dependency "hash" 00:02:04.416 Message: lib/timer: Defining dependency "timer" 00:02:04.416 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.416 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.416 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.416 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.416 Message: lib/power: Defining dependency "power" 00:02:04.416 Message: lib/reorder: Defining dependency "reorder" 00:02:04.416 Message: lib/security: Defining dependency "security" 00:02:04.416 Has header "linux/userfaultfd.h" : YES 00:02:04.416 Has header "linux/vduse.h" : YES 00:02:04.416 Message: lib/vhost: Defining dependency "vhost" 00:02:04.416 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:04.416 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.416 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.416 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.416 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.416 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.416 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.416 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.416 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.416 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.416 Program doxygen found: YES (/usr/bin/doxygen) 00:02:04.416 Configuring doxy-api-html.conf using configuration 00:02:04.416 Configuring doxy-api-man.conf using configuration 00:02:04.416 Program mandb found: YES (/usr/bin/mandb) 00:02:04.416 Program sphinx-build found: NO 00:02:04.416 Configuring rte_build_config.h using configuration 00:02:04.416 Message: 00:02:04.416 ================= 00:02:04.416 Applications Enabled 00:02:04.416 ================= 00:02:04.416 00:02:04.416 apps: 00:02:04.416 00:02:04.416 00:02:04.416 Message: 00:02:04.416 ================= 00:02:04.416 Libraries Enabled 00:02:04.416 ================= 00:02:04.416 00:02:04.416 libs: 00:02:04.416 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.416 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.416 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.416 00:02:04.416 Message: 00:02:04.416 =============== 00:02:04.416 Drivers Enabled 00:02:04.416 =============== 00:02:04.416 00:02:04.416 common: 00:02:04.416 00:02:04.416 bus: 00:02:04.416 pci, vdev, 00:02:04.416 mempool: 00:02:04.416 ring, 00:02:04.416 dma: 00:02:04.416 00:02:04.416 net: 00:02:04.416 00:02:04.416 crypto: 00:02:04.416 00:02:04.416 compress: 00:02:04.416 00:02:04.416 vdpa: 00:02:04.416 00:02:04.416 00:02:04.416 Message: 00:02:04.416 ================= 00:02:04.416 Content Skipped 00:02:04.417 ================= 00:02:04.417 00:02:04.417 apps: 00:02:04.417 dumpcap: explicitly disabled via build config 00:02:04.417 graph: explicitly disabled via build config 00:02:04.417 pdump: explicitly disabled via build config 00:02:04.417 proc-info: explicitly disabled via build config 00:02:04.417 test-acl: explicitly disabled via build config 00:02:04.417 test-bbdev: explicitly disabled via build config 00:02:04.417 test-cmdline: explicitly disabled via build config 00:02:04.417 test-compress-perf: explicitly disabled via build config 00:02:04.417 test-crypto-perf: explicitly disabled via build config 00:02:04.417 test-dma-perf: explicitly disabled via build config 00:02:04.417 test-eventdev: explicitly disabled via build config 00:02:04.417 test-fib: explicitly disabled via build config 00:02:04.417 test-flow-perf: explicitly disabled via build config 00:02:04.417 test-gpudev: explicitly disabled via build config 00:02:04.417 test-mldev: explicitly disabled via build config 00:02:04.417 test-pipeline: explicitly disabled via build config 00:02:04.417 test-pmd: explicitly disabled via build config 00:02:04.417 test-regex: explicitly disabled via build config 00:02:04.417 test-sad: explicitly disabled via build config 00:02:04.417 test-security-perf: explicitly disabled via build config 00:02:04.417 00:02:04.417 libs: 00:02:04.417 metrics: explicitly disabled via build config 00:02:04.417 acl: explicitly disabled via build config 00:02:04.417 bbdev: explicitly disabled via build config 00:02:04.417 bitratestats: explicitly disabled via build config 00:02:04.417 bpf: explicitly disabled via build config 00:02:04.417 cfgfile: explicitly 
disabled via build config 00:02:04.417 distributor: explicitly disabled via build config 00:02:04.417 efd: explicitly disabled via build config 00:02:04.417 eventdev: explicitly disabled via build config 00:02:04.417 dispatcher: explicitly disabled via build config 00:02:04.417 gpudev: explicitly disabled via build config 00:02:04.417 gro: explicitly disabled via build config 00:02:04.417 gso: explicitly disabled via build config 00:02:04.417 ip_frag: explicitly disabled via build config 00:02:04.417 jobstats: explicitly disabled via build config 00:02:04.417 latencystats: explicitly disabled via build config 00:02:04.417 lpm: explicitly disabled via build config 00:02:04.417 member: explicitly disabled via build config 00:02:04.417 pcapng: explicitly disabled via build config 00:02:04.417 rawdev: explicitly disabled via build config 00:02:04.417 regexdev: explicitly disabled via build config 00:02:04.417 mldev: explicitly disabled via build config 00:02:04.417 rib: explicitly disabled via build config 00:02:04.417 sched: explicitly disabled via build config 00:02:04.417 stack: explicitly disabled via build config 00:02:04.417 ipsec: explicitly disabled via build config 00:02:04.417 pdcp: explicitly disabled via build config 00:02:04.417 fib: explicitly disabled via build config 00:02:04.417 port: explicitly disabled via build config 00:02:04.417 pdump: explicitly disabled via build config 00:02:04.417 table: explicitly disabled via build config 00:02:04.417 pipeline: explicitly disabled via build config 00:02:04.417 graph: explicitly disabled via build config 00:02:04.417 node: explicitly disabled via build config 00:02:04.417 00:02:04.417 drivers: 00:02:04.417 common/cpt: not in enabled drivers build config 00:02:04.417 common/dpaax: not in enabled drivers build config 00:02:04.417 common/iavf: not in enabled drivers build config 00:02:04.417 common/idpf: not in enabled drivers build config 00:02:04.417 common/mvep: not in enabled drivers build config 00:02:04.417 common/octeontx: not in enabled drivers build config 00:02:04.417 bus/auxiliary: not in enabled drivers build config 00:02:04.417 bus/cdx: not in enabled drivers build config 00:02:04.417 bus/dpaa: not in enabled drivers build config 00:02:04.417 bus/fslmc: not in enabled drivers build config 00:02:04.417 bus/ifpga: not in enabled drivers build config 00:02:04.417 bus/platform: not in enabled drivers build config 00:02:04.417 bus/vmbus: not in enabled drivers build config 00:02:04.417 common/cnxk: not in enabled drivers build config 00:02:04.417 common/mlx5: not in enabled drivers build config 00:02:04.417 common/nfp: not in enabled drivers build config 00:02:04.417 common/qat: not in enabled drivers build config 00:02:04.417 common/sfc_efx: not in enabled drivers build config 00:02:04.417 mempool/bucket: not in enabled drivers build config 00:02:04.417 mempool/cnxk: not in enabled drivers build config 00:02:04.417 mempool/dpaa: not in enabled drivers build config 00:02:04.417 mempool/dpaa2: not in enabled drivers build config 00:02:04.417 mempool/octeontx: not in enabled drivers build config 00:02:04.417 mempool/stack: not in enabled drivers build config 00:02:04.417 dma/cnxk: not in enabled drivers build config 00:02:04.417 dma/dpaa: not in enabled drivers build config 00:02:04.417 dma/dpaa2: not in enabled drivers build config 00:02:04.417 dma/hisilicon: not in enabled drivers build config 00:02:04.417 dma/idxd: not in enabled drivers build config 00:02:04.417 dma/ioat: not in enabled drivers build config 00:02:04.417 
dma/skeleton: not in enabled drivers build config 00:02:04.417 net/af_packet: not in enabled drivers build config 00:02:04.417 net/af_xdp: not in enabled drivers build config 00:02:04.417 net/ark: not in enabled drivers build config 00:02:04.417 net/atlantic: not in enabled drivers build config 00:02:04.417 net/avp: not in enabled drivers build config 00:02:04.417 net/axgbe: not in enabled drivers build config 00:02:04.417 net/bnx2x: not in enabled drivers build config 00:02:04.417 net/bnxt: not in enabled drivers build config 00:02:04.417 net/bonding: not in enabled drivers build config 00:02:04.417 net/cnxk: not in enabled drivers build config 00:02:04.417 net/cpfl: not in enabled drivers build config 00:02:04.417 net/cxgbe: not in enabled drivers build config 00:02:04.417 net/dpaa: not in enabled drivers build config 00:02:04.417 net/dpaa2: not in enabled drivers build config 00:02:04.417 net/e1000: not in enabled drivers build config 00:02:04.417 net/ena: not in enabled drivers build config 00:02:04.417 net/enetc: not in enabled drivers build config 00:02:04.417 net/enetfec: not in enabled drivers build config 00:02:04.417 net/enic: not in enabled drivers build config 00:02:04.417 net/failsafe: not in enabled drivers build config 00:02:04.417 net/fm10k: not in enabled drivers build config 00:02:04.417 net/gve: not in enabled drivers build config 00:02:04.417 net/hinic: not in enabled drivers build config 00:02:04.417 net/hns3: not in enabled drivers build config 00:02:04.417 net/i40e: not in enabled drivers build config 00:02:04.417 net/iavf: not in enabled drivers build config 00:02:04.417 net/ice: not in enabled drivers build config 00:02:04.417 net/idpf: not in enabled drivers build config 00:02:04.417 net/igc: not in enabled drivers build config 00:02:04.417 net/ionic: not in enabled drivers build config 00:02:04.417 net/ipn3ke: not in enabled drivers build config 00:02:04.417 net/ixgbe: not in enabled drivers build config 00:02:04.417 net/mana: not in enabled drivers build config 00:02:04.417 net/memif: not in enabled drivers build config 00:02:04.417 net/mlx4: not in enabled drivers build config 00:02:04.417 net/mlx5: not in enabled drivers build config 00:02:04.417 net/mvneta: not in enabled drivers build config 00:02:04.417 net/mvpp2: not in enabled drivers build config 00:02:04.417 net/netvsc: not in enabled drivers build config 00:02:04.417 net/nfb: not in enabled drivers build config 00:02:04.417 net/nfp: not in enabled drivers build config 00:02:04.417 net/ngbe: not in enabled drivers build config 00:02:04.417 net/null: not in enabled drivers build config 00:02:04.417 net/octeontx: not in enabled drivers build config 00:02:04.417 net/octeon_ep: not in enabled drivers build config 00:02:04.417 net/pcap: not in enabled drivers build config 00:02:04.417 net/pfe: not in enabled drivers build config 00:02:04.417 net/qede: not in enabled drivers build config 00:02:04.417 net/ring: not in enabled drivers build config 00:02:04.417 net/sfc: not in enabled drivers build config 00:02:04.417 net/softnic: not in enabled drivers build config 00:02:04.417 net/tap: not in enabled drivers build config 00:02:04.417 net/thunderx: not in enabled drivers build config 00:02:04.417 net/txgbe: not in enabled drivers build config 00:02:04.417 net/vdev_netvsc: not in enabled drivers build config 00:02:04.417 net/vhost: not in enabled drivers build config 00:02:04.417 net/virtio: not in enabled drivers build config 00:02:04.417 net/vmxnet3: not in enabled drivers build config 00:02:04.417 raw/*: 
missing internal dependency, "rawdev" 00:02:04.417 crypto/armv8: not in enabled drivers build config 00:02:04.417 crypto/bcmfs: not in enabled drivers build config 00:02:04.417 crypto/caam_jr: not in enabled drivers build config 00:02:04.417 crypto/ccp: not in enabled drivers build config 00:02:04.417 crypto/cnxk: not in enabled drivers build config 00:02:04.417 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.417 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.417 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.417 crypto/mlx5: not in enabled drivers build config 00:02:04.417 crypto/mvsam: not in enabled drivers build config 00:02:04.417 crypto/nitrox: not in enabled drivers build config 00:02:04.417 crypto/null: not in enabled drivers build config 00:02:04.417 crypto/octeontx: not in enabled drivers build config 00:02:04.417 crypto/openssl: not in enabled drivers build config 00:02:04.417 crypto/scheduler: not in enabled drivers build config 00:02:04.417 crypto/uadk: not in enabled drivers build config 00:02:04.417 crypto/virtio: not in enabled drivers build config 00:02:04.417 compress/isal: not in enabled drivers build config 00:02:04.417 compress/mlx5: not in enabled drivers build config 00:02:04.417 compress/octeontx: not in enabled drivers build config 00:02:04.417 compress/zlib: not in enabled drivers build config 00:02:04.417 regex/*: missing internal dependency, "regexdev" 00:02:04.417 ml/*: missing internal dependency, "mldev" 00:02:04.417 vdpa/ifc: not in enabled drivers build config 00:02:04.417 vdpa/mlx5: not in enabled drivers build config 00:02:04.417 vdpa/nfp: not in enabled drivers build config 00:02:04.417 vdpa/sfc: not in enabled drivers build config 00:02:04.417 event/*: missing internal dependency, "eventdev" 00:02:04.417 baseband/*: missing internal dependency, "bbdev" 00:02:04.417 gpu/*: missing internal dependency, "gpudev" 00:02:04.417 00:02:04.417 00:02:04.417 Build targets in project: 85 00:02:04.417 00:02:04.417 DPDK 23.11.0 00:02:04.417 00:02:04.417 User defined options 00:02:04.417 buildtype : debug 00:02:04.418 default_library : shared 00:02:04.418 libdir : lib 00:02:04.418 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.418 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.418 c_link_args : 00:02:04.418 cpu_instruction_set: native 00:02:04.418 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:04.418 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:04.418 enable_docs : false 00:02:04.418 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:04.418 enable_kmods : false 00:02:04.418 tests : false 00:02:04.418 00:02:04.418 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.418 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:04.418 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.418 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.418 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.418 [4/265] 
Linking static target lib/librte_kvargs.a 00:02:04.418 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.418 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.418 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.418 [8/265] Linking static target lib/librte_log.a 00:02:04.418 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.418 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.418 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.418 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.418 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.418 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.418 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.418 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.418 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.418 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.418 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.418 [20/265] Linking static target lib/librte_telemetry.a 00:02:04.418 [21/265] Linking target lib/librte_log.so.24.0 00:02:04.418 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.418 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.676 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:04.676 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:04.676 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.935 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:04.935 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.935 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.193 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.193 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.193 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.193 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:05.193 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.193 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.451 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:05.451 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.451 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.451 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.451 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.451 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.710 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.710 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.968 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.968 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.226 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.226 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.226 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.226 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.484 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.484 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.484 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.484 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:06.484 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:06.484 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.743 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.002 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.002 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:07.002 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.002 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.261 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.261 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.261 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.261 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.261 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.519 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.519 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:07.519 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.778 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:07.778 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.036 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.036 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.036 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.036 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.036 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.036 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.300 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.300 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.561 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.561 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.561 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.561 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.819 [83/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.819 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.819 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.819 [86/265] Linking static target lib/librte_ring.a 00:02:08.819 [87/265] Linking static target lib/librte_eal.a 00:02:09.386 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.386 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.386 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.386 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.386 [92/265] Linking static target lib/librte_rcu.a 00:02:09.386 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.676 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.676 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.676 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.676 [97/265] Linking static target lib/librte_mempool.a 00:02:09.935 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:09.935 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.935 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.935 [101/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.935 [102/265] Linking static target lib/librte_mbuf.a 00:02:09.935 [103/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.194 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.194 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.194 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.453 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.453 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.453 [109/265] Linking static target lib/librte_net.a 00:02:10.711 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.711 [111/265] Linking static target lib/librte_meter.a 00:02:10.971 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.971 [113/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.971 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.971 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.230 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.230 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.230 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.231 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.489 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.748 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.748 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.007 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.007 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 
00:02:12.007 [125/265] Linking static target lib/librte_pci.a 00:02:12.265 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.265 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.265 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.524 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.524 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.524 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.524 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.524 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.524 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.524 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.524 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.524 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.524 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.524 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.524 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.524 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.524 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.782 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.039 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.039 [145/265] Linking static target lib/librte_ethdev.a 00:02:13.039 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.039 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.039 [148/265] Linking static target lib/librte_cmdline.a 00:02:13.297 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.297 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.297 [151/265] Linking static target lib/librte_timer.a 00:02:13.297 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.582 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.582 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.841 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.841 [156/265] Linking static target lib/librte_compressdev.a 00:02:13.841 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.100 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.100 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.100 [160/265] Linking static target lib/librte_hash.a 00:02:14.100 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.358 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.358 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.616 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.616 [165/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.616 [166/265] Linking static target lib/librte_dmadev.a 00:02:14.616 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.874 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.874 [169/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.874 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.874 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.874 [172/265] Linking static target lib/librte_cryptodev.a 00:02:14.874 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.132 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.132 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.132 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.391 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.391 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.391 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.391 [180/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.649 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.908 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.908 [183/265] Linking static target lib/librte_power.a 00:02:15.908 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.908 [185/265] Linking static target lib/librte_reorder.a 00:02:16.166 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.166 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.166 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.166 [189/265] Linking static target lib/librte_security.a 00:02:16.166 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.424 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.682 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.942 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.942 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.942 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.942 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.942 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.201 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.201 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.459 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.459 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.459 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.718 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.718 [204/265] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.718 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.718 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.975 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.975 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.975 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.975 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.975 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.975 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.975 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:17.975 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.975 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.975 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.975 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:18.233 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.233 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.233 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.495 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:18.495 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.495 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.495 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:18.495 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.434 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.434 [227/265] Linking static target lib/librte_vhost.a 00:02:20.000 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.000 [229/265] Linking target lib/librte_eal.so.24.0 00:02:20.259 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:20.259 [231/265] Linking target lib/librte_ring.so.24.0 00:02:20.259 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:20.259 [233/265] Linking target lib/librte_meter.so.24.0 00:02:20.259 [234/265] Linking target lib/librte_pci.so.24.0 00:02:20.259 [235/265] Linking target lib/librte_timer.so.24.0 00:02:20.259 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.517 [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.517 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:20.517 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:20.517 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:20.517 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:20.517 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:20.517 [243/265] Linking target 
lib/librte_rcu.so.24.0 00:02:20.517 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:20.517 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:20.517 [246/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.775 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.775 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.775 [249/265] Linking target lib/librte_mbuf.so.24.0 00:02:20.775 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:20.775 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.033 [252/265] Linking target lib/librte_net.so.24.0 00:02:21.033 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:21.033 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:21.033 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:21.033 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.033 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:21.033 [258/265] Linking target lib/librte_hash.so.24.0 00:02:21.033 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:21.033 [260/265] Linking target lib/librte_security.so.24.0 00:02:21.033 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:21.291 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.292 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:21.292 [264/265] Linking target lib/librte_vhost.so.24.0 00:02:21.292 [265/265] Linking target lib/librte_power.so.24.0 00:02:21.292 INFO: autodetecting backend as ninja 00:02:21.292 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:22.667 CC lib/log/log.o 00:02:22.667 CC lib/log/log_deprecated.o 00:02:22.667 CC lib/log/log_flags.o 00:02:22.667 CC lib/ut_mock/mock.o 00:02:22.667 CC lib/ut/ut.o 00:02:22.667 LIB libspdk_ut_mock.a 00:02:22.667 SO libspdk_ut_mock.so.6.0 00:02:22.667 LIB libspdk_log.a 00:02:22.667 LIB libspdk_ut.a 00:02:22.667 SYMLINK libspdk_ut_mock.so 00:02:22.925 SO libspdk_log.so.7.0 00:02:22.925 SO libspdk_ut.so.2.0 00:02:22.925 SYMLINK libspdk_ut.so 00:02:22.925 SYMLINK libspdk_log.so 00:02:23.183 CC lib/dma/dma.o 00:02:23.183 CC lib/ioat/ioat.o 00:02:23.183 CC lib/util/base64.o 00:02:23.183 CC lib/util/bit_array.o 00:02:23.183 CC lib/util/cpuset.o 00:02:23.183 CC lib/util/crc16.o 00:02:23.183 CXX lib/trace_parser/trace.o 00:02:23.183 CC lib/util/crc32.o 00:02:23.183 CC lib/util/crc32c.o 00:02:23.183 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.183 CC lib/util/crc32_ieee.o 00:02:23.183 CC lib/util/crc64.o 00:02:23.183 CC lib/util/dif.o 00:02:23.183 CC lib/vfio_user/host/vfio_user.o 00:02:23.183 LIB libspdk_dma.a 00:02:23.183 CC lib/util/fd.o 00:02:23.441 SO libspdk_dma.so.4.0 00:02:23.441 CC lib/util/file.o 00:02:23.441 CC lib/util/hexlify.o 00:02:23.441 SYMLINK libspdk_dma.so 00:02:23.441 CC lib/util/iov.o 00:02:23.441 CC lib/util/math.o 00:02:23.441 LIB libspdk_ioat.a 00:02:23.441 CC lib/util/pipe.o 00:02:23.441 SO libspdk_ioat.so.7.0 00:02:23.441 CC lib/util/strerror_tls.o 00:02:23.441 SYMLINK libspdk_ioat.so 00:02:23.441 CC lib/util/string.o 00:02:23.441 CC lib/util/uuid.o 00:02:23.441 LIB libspdk_vfio_user.a 00:02:23.700 SO libspdk_vfio_user.so.5.0 00:02:23.700 CC 
lib/util/fd_group.o 00:02:23.700 CC lib/util/xor.o 00:02:23.700 CC lib/util/zipf.o 00:02:23.700 SYMLINK libspdk_vfio_user.so 00:02:23.960 LIB libspdk_util.a 00:02:23.960 SO libspdk_util.so.9.0 00:02:24.219 SYMLINK libspdk_util.so 00:02:24.219 LIB libspdk_trace_parser.a 00:02:24.219 SO libspdk_trace_parser.so.5.0 00:02:24.219 CC lib/json/json_parse.o 00:02:24.219 CC lib/json/json_util.o 00:02:24.219 CC lib/conf/conf.o 00:02:24.219 CC lib/json/json_write.o 00:02:24.219 SYMLINK libspdk_trace_parser.so 00:02:24.477 CC lib/env_dpdk/env.o 00:02:24.477 CC lib/env_dpdk/memory.o 00:02:24.477 CC lib/rdma/common.o 00:02:24.477 CC lib/env_dpdk/pci.o 00:02:24.477 CC lib/idxd/idxd.o 00:02:24.477 CC lib/vmd/vmd.o 00:02:24.477 LIB libspdk_conf.a 00:02:24.477 CC lib/idxd/idxd_user.o 00:02:24.477 CC lib/env_dpdk/init.o 00:02:24.736 SO libspdk_conf.so.6.0 00:02:24.736 CC lib/rdma/rdma_verbs.o 00:02:24.736 LIB libspdk_json.a 00:02:24.736 SYMLINK libspdk_conf.so 00:02:24.736 CC lib/env_dpdk/threads.o 00:02:24.736 SO libspdk_json.so.6.0 00:02:24.736 CC lib/env_dpdk/pci_ioat.o 00:02:24.736 SYMLINK libspdk_json.so 00:02:24.736 CC lib/vmd/led.o 00:02:24.736 CC lib/env_dpdk/pci_virtio.o 00:02:24.995 LIB libspdk_rdma.a 00:02:24.995 CC lib/env_dpdk/pci_vmd.o 00:02:24.995 LIB libspdk_idxd.a 00:02:24.995 SO libspdk_rdma.so.6.0 00:02:24.995 SO libspdk_idxd.so.12.0 00:02:24.995 CC lib/env_dpdk/pci_idxd.o 00:02:24.995 CC lib/env_dpdk/pci_event.o 00:02:24.995 SYMLINK libspdk_rdma.so 00:02:24.995 CC lib/env_dpdk/sigbus_handler.o 00:02:24.995 CC lib/env_dpdk/pci_dpdk.o 00:02:24.995 SYMLINK libspdk_idxd.so 00:02:24.995 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.995 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.995 LIB libspdk_vmd.a 00:02:24.995 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.995 SO libspdk_vmd.so.6.0 00:02:24.995 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:25.253 SYMLINK libspdk_vmd.so 00:02:25.253 CC lib/jsonrpc/jsonrpc_client.o 00:02:25.253 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.512 LIB libspdk_jsonrpc.a 00:02:25.512 SO libspdk_jsonrpc.so.6.0 00:02:25.512 SYMLINK libspdk_jsonrpc.so 00:02:25.770 CC lib/rpc/rpc.o 00:02:25.770 LIB libspdk_env_dpdk.a 00:02:25.770 SO libspdk_env_dpdk.so.14.0 00:02:26.027 LIB libspdk_rpc.a 00:02:26.027 SO libspdk_rpc.so.6.0 00:02:26.027 SYMLINK libspdk_env_dpdk.so 00:02:26.027 SYMLINK libspdk_rpc.so 00:02:26.285 CC lib/keyring/keyring.o 00:02:26.285 CC lib/keyring/keyring_rpc.o 00:02:26.285 CC lib/trace/trace.o 00:02:26.285 CC lib/trace/trace_flags.o 00:02:26.285 CC lib/trace/trace_rpc.o 00:02:26.285 CC lib/notify/notify.o 00:02:26.285 CC lib/notify/notify_rpc.o 00:02:26.543 LIB libspdk_notify.a 00:02:26.543 LIB libspdk_keyring.a 00:02:26.543 LIB libspdk_trace.a 00:02:26.543 SO libspdk_notify.so.6.0 00:02:26.543 SO libspdk_trace.so.10.0 00:02:26.543 SO libspdk_keyring.so.1.0 00:02:26.802 SYMLINK libspdk_notify.so 00:02:26.802 SYMLINK libspdk_keyring.so 00:02:26.802 SYMLINK libspdk_trace.so 00:02:26.802 CC lib/thread/thread.o 00:02:27.061 CC lib/thread/iobuf.o 00:02:27.061 CC lib/sock/sock.o 00:02:27.061 CC lib/sock/sock_rpc.o 00:02:27.326 LIB libspdk_sock.a 00:02:27.326 SO libspdk_sock.so.9.0 00:02:27.598 SYMLINK libspdk_sock.so 00:02:27.857 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.857 CC lib/nvme/nvme_ctrlr.o 00:02:27.857 CC lib/nvme/nvme_fabric.o 00:02:27.857 CC lib/nvme/nvme_ns.o 00:02:27.857 CC lib/nvme/nvme_ns_cmd.o 00:02:27.857 CC lib/nvme/nvme_pcie_common.o 00:02:27.857 CC lib/nvme/nvme_pcie.o 00:02:27.857 CC lib/nvme/nvme_qpair.o 00:02:27.857 CC lib/nvme/nvme.o 00:02:28.791 LIB 
libspdk_thread.a 00:02:28.791 SO libspdk_thread.so.10.0 00:02:28.791 CC lib/nvme/nvme_quirks.o 00:02:28.791 CC lib/nvme/nvme_transport.o 00:02:28.791 SYMLINK libspdk_thread.so 00:02:28.791 CC lib/nvme/nvme_discovery.o 00:02:28.791 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:28.791 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:28.791 CC lib/nvme/nvme_tcp.o 00:02:28.791 CC lib/nvme/nvme_opal.o 00:02:28.791 CC lib/nvme/nvme_io_msg.o 00:02:29.051 CC lib/nvme/nvme_poll_group.o 00:02:29.309 CC lib/nvme/nvme_zns.o 00:02:29.309 CC lib/nvme/nvme_stubs.o 00:02:29.568 CC lib/accel/accel.o 00:02:29.568 CC lib/accel/accel_rpc.o 00:02:29.568 CC lib/blob/blobstore.o 00:02:29.568 CC lib/init/json_config.o 00:02:29.568 CC lib/blob/request.o 00:02:29.826 CC lib/blob/zeroes.o 00:02:29.826 CC lib/init/subsystem.o 00:02:29.826 CC lib/nvme/nvme_auth.o 00:02:29.826 CC lib/nvme/nvme_cuse.o 00:02:29.826 CC lib/nvme/nvme_rdma.o 00:02:30.085 CC lib/init/subsystem_rpc.o 00:02:30.085 CC lib/init/rpc.o 00:02:30.085 CC lib/blob/blob_bs_dev.o 00:02:30.085 CC lib/accel/accel_sw.o 00:02:30.085 CC lib/virtio/virtio.o 00:02:30.343 LIB libspdk_init.a 00:02:30.343 SO libspdk_init.so.5.0 00:02:30.343 SYMLINK libspdk_init.so 00:02:30.343 CC lib/virtio/virtio_vhost_user.o 00:02:30.343 CC lib/virtio/virtio_vfio_user.o 00:02:30.602 CC lib/virtio/virtio_pci.o 00:02:30.602 LIB libspdk_accel.a 00:02:30.602 CC lib/event/app.o 00:02:30.602 CC lib/event/reactor.o 00:02:30.602 SO libspdk_accel.so.15.0 00:02:30.602 SYMLINK libspdk_accel.so 00:02:30.602 CC lib/event/log_rpc.o 00:02:30.602 CC lib/event/app_rpc.o 00:02:30.861 CC lib/event/scheduler_static.o 00:02:30.861 LIB libspdk_virtio.a 00:02:30.861 SO libspdk_virtio.so.7.0 00:02:30.861 CC lib/bdev/bdev.o 00:02:30.861 CC lib/bdev/bdev_zone.o 00:02:30.861 CC lib/bdev/bdev_rpc.o 00:02:30.861 CC lib/bdev/part.o 00:02:30.861 CC lib/bdev/scsi_nvme.o 00:02:30.861 SYMLINK libspdk_virtio.so 00:02:30.861 LIB libspdk_event.a 00:02:31.120 SO libspdk_event.so.13.0 00:02:31.120 SYMLINK libspdk_event.so 00:02:31.379 LIB libspdk_nvme.a 00:02:31.640 SO libspdk_nvme.so.13.0 00:02:31.927 SYMLINK libspdk_nvme.so 00:02:32.494 LIB libspdk_blob.a 00:02:32.494 SO libspdk_blob.so.11.0 00:02:32.752 SYMLINK libspdk_blob.so 00:02:33.010 CC lib/blobfs/tree.o 00:02:33.010 CC lib/lvol/lvol.o 00:02:33.010 CC lib/blobfs/blobfs.o 00:02:33.576 LIB libspdk_bdev.a 00:02:33.835 SO libspdk_bdev.so.15.0 00:02:33.835 LIB libspdk_blobfs.a 00:02:33.835 SO libspdk_blobfs.so.10.0 00:02:33.835 SYMLINK libspdk_bdev.so 00:02:33.835 LIB libspdk_lvol.a 00:02:33.835 SYMLINK libspdk_blobfs.so 00:02:33.835 SO libspdk_lvol.so.10.0 00:02:34.093 SYMLINK libspdk_lvol.so 00:02:34.093 CC lib/nvmf/ctrlr.o 00:02:34.093 CC lib/nvmf/ctrlr_discovery.o 00:02:34.093 CC lib/nvmf/ctrlr_bdev.o 00:02:34.093 CC lib/nbd/nbd.o 00:02:34.093 CC lib/nvmf/subsystem.o 00:02:34.093 CC lib/nvmf/nvmf.o 00:02:34.093 CC lib/nbd/nbd_rpc.o 00:02:34.093 CC lib/ftl/ftl_core.o 00:02:34.093 CC lib/ublk/ublk.o 00:02:34.093 CC lib/scsi/dev.o 00:02:34.351 CC lib/nvmf/nvmf_rpc.o 00:02:34.351 CC lib/scsi/lun.o 00:02:34.608 CC lib/ftl/ftl_init.o 00:02:34.608 LIB libspdk_nbd.a 00:02:34.608 SO libspdk_nbd.so.7.0 00:02:34.608 SYMLINK libspdk_nbd.so 00:02:34.608 CC lib/ublk/ublk_rpc.o 00:02:34.608 CC lib/nvmf/transport.o 00:02:34.867 CC lib/nvmf/tcp.o 00:02:34.867 CC lib/ftl/ftl_layout.o 00:02:34.867 CC lib/scsi/port.o 00:02:34.867 CC lib/ftl/ftl_debug.o 00:02:34.867 LIB libspdk_ublk.a 00:02:34.867 SO libspdk_ublk.so.3.0 00:02:34.867 CC lib/nvmf/rdma.o 00:02:35.125 SYMLINK libspdk_ublk.so 
00:02:35.125 CC lib/scsi/scsi.o 00:02:35.125 CC lib/scsi/scsi_bdev.o 00:02:35.125 CC lib/ftl/ftl_io.o 00:02:35.125 CC lib/scsi/scsi_pr.o 00:02:35.125 CC lib/ftl/ftl_sb.o 00:02:35.125 CC lib/scsi/scsi_rpc.o 00:02:35.384 CC lib/scsi/task.o 00:02:35.384 CC lib/ftl/ftl_l2p.o 00:02:35.384 CC lib/ftl/ftl_l2p_flat.o 00:02:35.384 CC lib/ftl/ftl_nv_cache.o 00:02:35.384 CC lib/ftl/ftl_band.o 00:02:35.384 CC lib/ftl/ftl_band_ops.o 00:02:35.384 CC lib/ftl/ftl_writer.o 00:02:35.643 LIB libspdk_scsi.a 00:02:35.643 CC lib/ftl/ftl_rq.o 00:02:35.643 CC lib/ftl/ftl_reloc.o 00:02:35.643 SO libspdk_scsi.so.9.0 00:02:35.643 SYMLINK libspdk_scsi.so 00:02:35.643 CC lib/ftl/ftl_l2p_cache.o 00:02:35.900 CC lib/ftl/ftl_p2l.o 00:02:35.900 CC lib/ftl/mngt/ftl_mngt.o 00:02:35.900 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.900 CC lib/iscsi/conn.o 00:02:35.900 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.900 CC lib/vhost/vhost.o 00:02:36.158 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.158 CC lib/vhost/vhost_rpc.o 00:02:36.158 CC lib/vhost/vhost_scsi.o 00:02:36.158 CC lib/iscsi/init_grp.o 00:02:36.158 CC lib/iscsi/iscsi.o 00:02:36.416 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.416 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.416 CC lib/vhost/vhost_blk.o 00:02:36.416 CC lib/iscsi/md5.o 00:02:36.674 CC lib/vhost/rte_vhost_user.o 00:02:36.674 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.674 CC lib/iscsi/param.o 00:02:36.674 CC lib/iscsi/portal_grp.o 00:02:36.675 CC lib/iscsi/tgt_node.o 00:02:36.933 CC lib/iscsi/iscsi_subsystem.o 00:02:36.933 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.933 CC lib/iscsi/iscsi_rpc.o 00:02:36.933 CC lib/iscsi/task.o 00:02:36.933 LIB libspdk_nvmf.a 00:02:36.933 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:37.217 SO libspdk_nvmf.so.18.0 00:02:37.217 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:37.217 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.217 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:37.217 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.217 SYMLINK libspdk_nvmf.so 00:02:37.217 CC lib/ftl/utils/ftl_conf.o 00:02:37.491 CC lib/ftl/utils/ftl_md.o 00:02:37.491 CC lib/ftl/utils/ftl_mempool.o 00:02:37.491 CC lib/ftl/utils/ftl_bitmap.o 00:02:37.492 CC lib/ftl/utils/ftl_property.o 00:02:37.492 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:37.492 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:37.492 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:37.492 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.492 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.492 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.749 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.749 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.749 LIB libspdk_iscsi.a 00:02:37.749 LIB libspdk_vhost.a 00:02:37.749 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.749 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.749 SO libspdk_vhost.so.8.0 00:02:37.749 CC lib/ftl/base/ftl_base_dev.o 00:02:37.749 SO libspdk_iscsi.so.8.0 00:02:37.749 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.749 CC lib/ftl/ftl_trace.o 00:02:37.749 SYMLINK libspdk_vhost.so 00:02:38.007 SYMLINK libspdk_iscsi.so 00:02:38.007 LIB libspdk_ftl.a 00:02:38.265 SO libspdk_ftl.so.9.0 00:02:38.522 SYMLINK libspdk_ftl.so 00:02:39.088 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.088 CC module/keyring/file/keyring.o 00:02:39.088 CC module/accel/iaa/accel_iaa.o 00:02:39.088 CC module/sock/posix/posix.o 00:02:39.088 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.088 CC module/accel/ioat/accel_ioat.o 00:02:39.088 CC module/accel/error/accel_error.o 00:02:39.088 CC module/accel/dsa/accel_dsa.o 00:02:39.088 CC module/blob/bdev/blob_bdev.o 00:02:39.088 CC 
module/sock/uring/uring.o 00:02:39.088 LIB libspdk_env_dpdk_rpc.a 00:02:39.088 SO libspdk_env_dpdk_rpc.so.6.0 00:02:39.347 CC module/keyring/file/keyring_rpc.o 00:02:39.347 SYMLINK libspdk_env_dpdk_rpc.so 00:02:39.347 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.347 CC module/accel/error/accel_error_rpc.o 00:02:39.347 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.347 LIB libspdk_scheduler_dynamic.a 00:02:39.347 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.347 SO libspdk_scheduler_dynamic.so.4.0 00:02:39.347 LIB libspdk_blob_bdev.a 00:02:39.347 LIB libspdk_keyring_file.a 00:02:39.347 SO libspdk_blob_bdev.so.11.0 00:02:39.347 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.347 LIB libspdk_accel_ioat.a 00:02:39.347 SO libspdk_keyring_file.so.1.0 00:02:39.347 LIB libspdk_accel_error.a 00:02:39.347 LIB libspdk_accel_iaa.a 00:02:39.347 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.347 SO libspdk_accel_ioat.so.6.0 00:02:39.347 SO libspdk_accel_error.so.2.0 00:02:39.347 LIB libspdk_accel_dsa.a 00:02:39.347 SYMLINK libspdk_blob_bdev.so 00:02:39.347 SO libspdk_accel_iaa.so.3.0 00:02:39.605 SYMLINK libspdk_keyring_file.so 00:02:39.605 SO libspdk_accel_dsa.so.5.0 00:02:39.605 SYMLINK libspdk_accel_error.so 00:02:39.605 SYMLINK libspdk_accel_ioat.so 00:02:39.605 SYMLINK libspdk_accel_iaa.so 00:02:39.605 SYMLINK libspdk_accel_dsa.so 00:02:39.605 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.605 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.605 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:39.863 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.863 LIB libspdk_scheduler_gscheduler.a 00:02:39.863 CC module/bdev/malloc/bdev_malloc.o 00:02:39.863 CC module/bdev/gpt/gpt.o 00:02:39.863 CC module/bdev/lvol/vbdev_lvol.o 00:02:39.863 CC module/bdev/delay/vbdev_delay.o 00:02:39.863 CC module/blobfs/bdev/blobfs_bdev.o 00:02:39.863 CC module/bdev/error/vbdev_error.o 00:02:39.863 SO libspdk_scheduler_gscheduler.so.4.0 00:02:39.863 LIB libspdk_sock_uring.a 00:02:39.863 LIB libspdk_sock_posix.a 00:02:39.863 SO libspdk_sock_uring.so.5.0 00:02:39.863 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.863 CC module/bdev/error/vbdev_error_rpc.o 00:02:39.863 SO libspdk_sock_posix.so.6.0 00:02:39.863 SYMLINK libspdk_sock_uring.so 00:02:39.863 CC module/bdev/gpt/vbdev_gpt.o 00:02:39.863 CC module/bdev/null/bdev_null.o 00:02:39.863 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:39.863 SYMLINK libspdk_sock_posix.so 00:02:40.123 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.123 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.123 LIB libspdk_bdev_error.a 00:02:40.123 SO libspdk_bdev_error.so.6.0 00:02:40.123 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.123 CC module/bdev/nvme/bdev_nvme.o 00:02:40.123 CC module/bdev/null/bdev_null_rpc.o 00:02:40.123 LIB libspdk_bdev_delay.a 00:02:40.123 LIB libspdk_blobfs_bdev.a 00:02:40.123 SO libspdk_bdev_delay.so.6.0 00:02:40.123 SYMLINK libspdk_bdev_error.so 00:02:40.123 LIB libspdk_bdev_gpt.a 00:02:40.123 SO libspdk_blobfs_bdev.so.6.0 00:02:40.381 SO libspdk_bdev_gpt.so.6.0 00:02:40.381 SYMLINK libspdk_bdev_delay.so 00:02:40.381 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.381 SYMLINK libspdk_blobfs_bdev.so 00:02:40.381 SYMLINK libspdk_bdev_gpt.so 00:02:40.381 LIB libspdk_bdev_malloc.a 00:02:40.381 LIB libspdk_bdev_null.a 00:02:40.381 SO libspdk_bdev_malloc.so.6.0 00:02:40.381 CC module/bdev/raid/bdev_raid.o 00:02:40.381 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.381 LIB libspdk_bdev_lvol.a 00:02:40.381 SO libspdk_bdev_null.so.6.0 00:02:40.381 SO 
libspdk_bdev_lvol.so.6.0 00:02:40.381 SYMLINK libspdk_bdev_malloc.so 00:02:40.381 CC module/bdev/split/vbdev_split.o 00:02:40.381 SYMLINK libspdk_bdev_null.so 00:02:40.381 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.640 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.640 CC module/bdev/uring/bdev_uring.o 00:02:40.640 SYMLINK libspdk_bdev_lvol.so 00:02:40.640 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.640 CC module/bdev/aio/bdev_aio.o 00:02:40.640 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.640 LIB libspdk_bdev_passthru.a 00:02:40.640 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.640 SO libspdk_bdev_passthru.so.6.0 00:02:40.640 LIB libspdk_bdev_split.a 00:02:40.899 SO libspdk_bdev_split.so.6.0 00:02:40.899 SYMLINK libspdk_bdev_passthru.so 00:02:40.899 CC module/bdev/raid/raid0.o 00:02:40.899 SYMLINK libspdk_bdev_split.so 00:02:40.899 CC module/bdev/uring/bdev_uring_rpc.o 00:02:40.899 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.899 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.899 CC module/bdev/nvme/nvme_rpc.o 00:02:41.158 CC module/bdev/raid/raid1.o 00:02:41.158 CC module/bdev/nvme/bdev_mdns_client.o 00:02:41.158 CC module/bdev/ftl/bdev_ftl.o 00:02:41.158 LIB libspdk_bdev_zone_block.a 00:02:41.158 LIB libspdk_bdev_uring.a 00:02:41.158 LIB libspdk_bdev_aio.a 00:02:41.158 SO libspdk_bdev_zone_block.so.6.0 00:02:41.158 CC module/bdev/iscsi/bdev_iscsi.o 00:02:41.158 SO libspdk_bdev_aio.so.6.0 00:02:41.158 SO libspdk_bdev_uring.so.6.0 00:02:41.158 SYMLINK libspdk_bdev_zone_block.so 00:02:41.158 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:41.158 CC module/bdev/nvme/vbdev_opal.o 00:02:41.158 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:41.416 SYMLINK libspdk_bdev_uring.so 00:02:41.416 SYMLINK libspdk_bdev_aio.so 00:02:41.416 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:41.416 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:41.416 CC module/bdev/raid/concat.o 00:02:41.416 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:41.416 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.416 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:41.416 LIB libspdk_bdev_ftl.a 00:02:41.675 SO libspdk_bdev_ftl.so.6.0 00:02:41.675 LIB libspdk_bdev_iscsi.a 00:02:41.675 LIB libspdk_bdev_raid.a 00:02:41.675 SYMLINK libspdk_bdev_ftl.so 00:02:41.675 SO libspdk_bdev_iscsi.so.6.0 00:02:41.675 SO libspdk_bdev_raid.so.6.0 00:02:41.675 SYMLINK libspdk_bdev_iscsi.so 00:02:41.675 SYMLINK libspdk_bdev_raid.so 00:02:41.933 LIB libspdk_bdev_virtio.a 00:02:42.230 SO libspdk_bdev_virtio.so.6.0 00:02:42.230 SYMLINK libspdk_bdev_virtio.so 00:02:42.488 LIB libspdk_bdev_nvme.a 00:02:42.488 SO libspdk_bdev_nvme.so.7.0 00:02:42.747 SYMLINK libspdk_bdev_nvme.so 00:02:43.315 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.315 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.315 CC module/event/subsystems/sock/sock.o 00:02:43.315 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.315 CC module/event/subsystems/vmd/vmd.o 00:02:43.315 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.315 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.315 CC module/event/subsystems/keyring/keyring.o 00:02:43.315 LIB libspdk_event_sock.a 00:02:43.315 LIB libspdk_event_vhost_blk.a 00:02:43.315 LIB libspdk_event_keyring.a 00:02:43.315 LIB libspdk_event_scheduler.a 00:02:43.315 SO libspdk_event_sock.so.5.0 00:02:43.315 SO libspdk_event_vhost_blk.so.3.0 00:02:43.315 LIB libspdk_event_vmd.a 00:02:43.315 LIB libspdk_event_iobuf.a 00:02:43.315 SO libspdk_event_scheduler.so.4.0 00:02:43.315 SO libspdk_event_keyring.so.1.0 
00:02:43.315 SO libspdk_event_vmd.so.6.0 00:02:43.315 SYMLINK libspdk_event_sock.so 00:02:43.315 SO libspdk_event_iobuf.so.3.0 00:02:43.315 SYMLINK libspdk_event_keyring.so 00:02:43.315 SYMLINK libspdk_event_vhost_blk.so 00:02:43.315 SYMLINK libspdk_event_scheduler.so 00:02:43.573 SYMLINK libspdk_event_vmd.so 00:02:43.573 SYMLINK libspdk_event_iobuf.so 00:02:43.831 CC module/event/subsystems/accel/accel.o 00:02:43.831 LIB libspdk_event_accel.a 00:02:43.831 SO libspdk_event_accel.so.6.0 00:02:44.090 SYMLINK libspdk_event_accel.so 00:02:44.348 CC module/event/subsystems/bdev/bdev.o 00:02:44.608 LIB libspdk_event_bdev.a 00:02:44.608 SO libspdk_event_bdev.so.6.0 00:02:44.608 SYMLINK libspdk_event_bdev.so 00:02:44.866 CC module/event/subsystems/scsi/scsi.o 00:02:44.866 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.866 CC module/event/subsystems/ublk/ublk.o 00:02:44.866 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:44.866 CC module/event/subsystems/nbd/nbd.o 00:02:44.866 LIB libspdk_event_ublk.a 00:02:44.866 LIB libspdk_event_scsi.a 00:02:44.866 LIB libspdk_event_nbd.a 00:02:45.124 SO libspdk_event_scsi.so.6.0 00:02:45.124 SO libspdk_event_nbd.so.6.0 00:02:45.124 SO libspdk_event_ublk.so.3.0 00:02:45.124 SYMLINK libspdk_event_nbd.so 00:02:45.124 SYMLINK libspdk_event_ublk.so 00:02:45.124 SYMLINK libspdk_event_scsi.so 00:02:45.124 LIB libspdk_event_nvmf.a 00:02:45.124 SO libspdk_event_nvmf.so.6.0 00:02:45.382 SYMLINK libspdk_event_nvmf.so 00:02:45.382 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.382 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.640 LIB libspdk_event_vhost_scsi.a 00:02:45.640 LIB libspdk_event_iscsi.a 00:02:45.640 SO libspdk_event_vhost_scsi.so.3.0 00:02:45.640 SO libspdk_event_iscsi.so.6.0 00:02:45.640 SYMLINK libspdk_event_vhost_scsi.so 00:02:45.640 SYMLINK libspdk_event_iscsi.so 00:02:45.898 SO libspdk.so.6.0 00:02:45.898 SYMLINK libspdk.so 00:02:46.157 TEST_HEADER include/spdk/accel.h 00:02:46.157 TEST_HEADER include/spdk/accel_module.h 00:02:46.157 CXX app/trace/trace.o 00:02:46.157 TEST_HEADER include/spdk/assert.h 00:02:46.157 TEST_HEADER include/spdk/barrier.h 00:02:46.157 TEST_HEADER include/spdk/base64.h 00:02:46.157 TEST_HEADER include/spdk/bdev.h 00:02:46.157 TEST_HEADER include/spdk/bdev_module.h 00:02:46.157 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.157 TEST_HEADER include/spdk/bit_array.h 00:02:46.157 TEST_HEADER include/spdk/bit_pool.h 00:02:46.157 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.157 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.157 TEST_HEADER include/spdk/blobfs.h 00:02:46.157 TEST_HEADER include/spdk/blob.h 00:02:46.157 TEST_HEADER include/spdk/conf.h 00:02:46.157 TEST_HEADER include/spdk/config.h 00:02:46.157 TEST_HEADER include/spdk/cpuset.h 00:02:46.157 TEST_HEADER include/spdk/crc16.h 00:02:46.157 TEST_HEADER include/spdk/crc32.h 00:02:46.157 TEST_HEADER include/spdk/crc64.h 00:02:46.157 TEST_HEADER include/spdk/dif.h 00:02:46.157 TEST_HEADER include/spdk/dma.h 00:02:46.157 TEST_HEADER include/spdk/endian.h 00:02:46.157 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.157 TEST_HEADER include/spdk/env.h 00:02:46.157 TEST_HEADER include/spdk/event.h 00:02:46.157 TEST_HEADER include/spdk/fd_group.h 00:02:46.157 TEST_HEADER include/spdk/fd.h 00:02:46.157 TEST_HEADER include/spdk/file.h 00:02:46.157 TEST_HEADER include/spdk/ftl.h 00:02:46.157 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.157 TEST_HEADER include/spdk/hexlify.h 00:02:46.157 TEST_HEADER include/spdk/histogram_data.h 00:02:46.157 TEST_HEADER 
include/spdk/idxd.h 00:02:46.157 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.157 CC examples/accel/perf/accel_perf.o 00:02:46.157 TEST_HEADER include/spdk/init.h 00:02:46.157 TEST_HEADER include/spdk/ioat.h 00:02:46.157 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.157 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.157 TEST_HEADER include/spdk/json.h 00:02:46.157 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.157 TEST_HEADER include/spdk/keyring.h 00:02:46.157 TEST_HEADER include/spdk/keyring_module.h 00:02:46.157 TEST_HEADER include/spdk/likely.h 00:02:46.157 TEST_HEADER include/spdk/log.h 00:02:46.157 CC test/blobfs/mkfs/mkfs.o 00:02:46.157 TEST_HEADER include/spdk/lvol.h 00:02:46.157 TEST_HEADER include/spdk/memory.h 00:02:46.157 CC test/dma/test_dma/test_dma.o 00:02:46.157 TEST_HEADER include/spdk/mmio.h 00:02:46.157 TEST_HEADER include/spdk/nbd.h 00:02:46.157 TEST_HEADER include/spdk/notify.h 00:02:46.157 CC test/app/bdev_svc/bdev_svc.o 00:02:46.157 CC test/bdev/bdevio/bdevio.o 00:02:46.157 TEST_HEADER include/spdk/nvme.h 00:02:46.157 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.157 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.157 CC test/accel/dif/dif.o 00:02:46.157 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.157 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.157 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.157 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.416 TEST_HEADER include/spdk/nvmf.h 00:02:46.416 TEST_HEADER include/spdk/nvmf_spec.h 00:02:46.416 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.416 TEST_HEADER include/spdk/opal.h 00:02:46.416 TEST_HEADER include/spdk/opal_spec.h 00:02:46.416 TEST_HEADER include/spdk/pci_ids.h 00:02:46.416 TEST_HEADER include/spdk/pipe.h 00:02:46.416 TEST_HEADER include/spdk/queue.h 00:02:46.416 TEST_HEADER include/spdk/reduce.h 00:02:46.416 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.416 TEST_HEADER include/spdk/rpc.h 00:02:46.416 TEST_HEADER include/spdk/scheduler.h 00:02:46.416 TEST_HEADER include/spdk/scsi.h 00:02:46.416 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.416 TEST_HEADER include/spdk/sock.h 00:02:46.416 TEST_HEADER include/spdk/stdinc.h 00:02:46.416 TEST_HEADER include/spdk/string.h 00:02:46.416 TEST_HEADER include/spdk/thread.h 00:02:46.416 CC test/env/mem_callbacks/mem_callbacks.o 00:02:46.416 TEST_HEADER include/spdk/trace.h 00:02:46.416 TEST_HEADER include/spdk/trace_parser.h 00:02:46.416 TEST_HEADER include/spdk/tree.h 00:02:46.416 TEST_HEADER include/spdk/ublk.h 00:02:46.416 TEST_HEADER include/spdk/util.h 00:02:46.416 TEST_HEADER include/spdk/uuid.h 00:02:46.416 TEST_HEADER include/spdk/version.h 00:02:46.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.416 TEST_HEADER include/spdk/vhost.h 00:02:46.416 TEST_HEADER include/spdk/vmd.h 00:02:46.416 TEST_HEADER include/spdk/xor.h 00:02:46.416 TEST_HEADER include/spdk/zipf.h 00:02:46.416 CXX test/cpp_headers/accel.o 00:02:46.416 LINK bdev_svc 00:02:46.416 LINK mkfs 00:02:46.674 LINK spdk_trace 00:02:46.674 CXX test/cpp_headers/accel_module.o 00:02:46.674 LINK test_dma 00:02:46.674 LINK dif 00:02:46.674 LINK accel_perf 00:02:46.674 LINK bdevio 00:02:46.674 LINK hello_bdev 00:02:46.932 CXX test/cpp_headers/assert.o 00:02:46.932 CC test/app/histogram_perf/histogram_perf.o 00:02:46.932 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:46.932 CC app/trace_record/trace_record.o 00:02:46.932 CXX test/cpp_headers/barrier.o 00:02:46.932 LINK histogram_perf 
00:02:47.202 LINK mem_callbacks 00:02:47.202 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.202 CC test/app/jsoncat/jsoncat.o 00:02:47.202 CC test/app/stub/stub.o 00:02:47.202 CC examples/bdev/bdevperf/bdevperf.o 00:02:47.202 CXX test/cpp_headers/base64.o 00:02:47.202 CC test/event/event_perf/event_perf.o 00:02:47.202 LINK spdk_trace_record 00:02:47.202 LINK jsoncat 00:02:47.202 LINK stub 00:02:47.202 CC test/env/vtophys/vtophys.o 00:02:47.202 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.501 LINK nvme_fuzz 00:02:47.501 LINK event_perf 00:02:47.501 CXX test/cpp_headers/bdev.o 00:02:47.501 LINK vtophys 00:02:47.501 LINK env_dpdk_post_init 00:02:47.501 CC test/env/memory/memory_ut.o 00:02:47.501 CXX test/cpp_headers/bdev_module.o 00:02:47.501 CC app/nvmf_tgt/nvmf_main.o 00:02:47.501 CC test/env/pci/pci_ut.o 00:02:47.760 CC test/event/reactor/reactor.o 00:02:47.760 CC test/event/reactor_perf/reactor_perf.o 00:02:47.760 CC test/event/app_repeat/app_repeat.o 00:02:47.760 LINK nvmf_tgt 00:02:47.760 LINK reactor 00:02:47.760 LINK reactor_perf 00:02:47.760 CXX test/cpp_headers/bdev_zone.o 00:02:47.760 CC test/event/scheduler/scheduler.o 00:02:48.018 LINK bdevperf 00:02:48.018 LINK app_repeat 00:02:48.018 CXX test/cpp_headers/bit_array.o 00:02:48.018 LINK pci_ut 00:02:48.018 CXX test/cpp_headers/bit_pool.o 00:02:48.018 LINK scheduler 00:02:48.018 CC app/iscsi_tgt/iscsi_tgt.o 00:02:48.277 CC app/spdk_tgt/spdk_tgt.o 00:02:48.277 CXX test/cpp_headers/blob_bdev.o 00:02:48.277 CC app/spdk_lspci/spdk_lspci.o 00:02:48.277 LINK iscsi_tgt 00:02:48.277 CC examples/blob/hello_world/hello_blob.o 00:02:48.535 CC app/spdk_nvme_perf/perf.o 00:02:48.535 CXX test/cpp_headers/blobfs_bdev.o 00:02:48.535 LINK spdk_tgt 00:02:48.535 CC app/spdk_nvme_identify/identify.o 00:02:48.535 LINK spdk_lspci 00:02:48.535 CC test/lvol/esnap/esnap.o 00:02:48.535 LINK memory_ut 00:02:48.535 LINK hello_blob 00:02:48.794 CXX test/cpp_headers/blobfs.o 00:02:48.794 CC examples/blob/cli/blobcli.o 00:02:48.794 CC app/spdk_nvme_discover/discovery_aer.o 00:02:48.794 LINK iscsi_fuzz 00:02:48.794 CC app/spdk_top/spdk_top.o 00:02:48.794 CC app/vhost/vhost.o 00:02:48.794 CXX test/cpp_headers/blob.o 00:02:49.052 LINK spdk_nvme_discover 00:02:49.052 CXX test/cpp_headers/conf.o 00:02:49.052 CC app/spdk_dd/spdk_dd.o 00:02:49.052 LINK vhost 00:02:49.310 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.310 CXX test/cpp_headers/config.o 00:02:49.310 LINK blobcli 00:02:49.310 CXX test/cpp_headers/cpuset.o 00:02:49.310 LINK spdk_nvme_perf 00:02:49.310 LINK spdk_nvme_identify 00:02:49.310 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:49.310 CC app/fio/nvme/fio_plugin.o 00:02:49.310 CXX test/cpp_headers/crc16.o 00:02:49.568 CC examples/ioat/perf/perf.o 00:02:49.568 LINK spdk_dd 00:02:49.568 CC examples/ioat/verify/verify.o 00:02:49.568 CXX test/cpp_headers/crc32.o 00:02:49.568 CC examples/nvme/hello_world/hello_world.o 00:02:49.826 LINK spdk_top 00:02:49.826 LINK ioat_perf 00:02:49.826 CC examples/sock/hello_world/hello_sock.o 00:02:49.826 LINK verify 00:02:49.826 LINK vhost_fuzz 00:02:49.826 CC examples/nvme/reconnect/reconnect.o 00:02:49.826 CXX test/cpp_headers/crc64.o 00:02:49.826 CXX test/cpp_headers/dif.o 00:02:49.826 CXX test/cpp_headers/dma.o 00:02:49.826 LINK spdk_nvme 00:02:50.084 LINK hello_sock 00:02:50.084 CXX test/cpp_headers/endian.o 00:02:50.084 LINK hello_world 00:02:50.084 CXX test/cpp_headers/env_dpdk.o 00:02:50.084 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:50.084 CC examples/nvme/arbitration/arbitration.o 
00:02:50.084 LINK reconnect 00:02:50.340 CC app/fio/bdev/fio_plugin.o 00:02:50.340 CC examples/nvme/hotplug/hotplug.o 00:02:50.340 CC test/nvme/aer/aer.o 00:02:50.340 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.340 CC examples/nvme/abort/abort.o 00:02:50.340 CXX test/cpp_headers/env.o 00:02:50.598 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.598 LINK cmb_copy 00:02:50.598 LINK hotplug 00:02:50.598 LINK arbitration 00:02:50.598 CXX test/cpp_headers/event.o 00:02:50.598 LINK aer 00:02:50.598 CXX test/cpp_headers/fd_group.o 00:02:50.598 LINK pmr_persistence 00:02:50.598 LINK nvme_manage 00:02:50.856 CXX test/cpp_headers/fd.o 00:02:50.856 LINK abort 00:02:50.856 LINK spdk_bdev 00:02:50.856 CC test/nvme/reset/reset.o 00:02:50.856 CC test/nvme/sgl/sgl.o 00:02:50.856 CC test/nvme/e2edp/nvme_dp.o 00:02:50.856 CXX test/cpp_headers/file.o 00:02:50.856 CC test/nvme/overhead/overhead.o 00:02:51.114 CC test/nvme/err_injection/err_injection.o 00:02:51.114 CC test/nvme/reserve/reserve.o 00:02:51.114 CC test/nvme/startup/startup.o 00:02:51.114 CXX test/cpp_headers/ftl.o 00:02:51.114 LINK reset 00:02:51.114 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.114 LINK sgl 00:02:51.114 LINK nvme_dp 00:02:51.114 LINK err_injection 00:02:51.114 LINK startup 00:02:51.114 LINK reserve 00:02:51.114 LINK overhead 00:02:51.372 LINK lsvmd 00:02:51.372 CXX test/cpp_headers/gpt_spec.o 00:02:51.372 CC test/nvme/simple_copy/simple_copy.o 00:02:51.372 CXX test/cpp_headers/hexlify.o 00:02:51.372 CXX test/cpp_headers/histogram_data.o 00:02:51.372 CXX test/cpp_headers/idxd.o 00:02:51.372 CC examples/vmd/led/led.o 00:02:51.372 CXX test/cpp_headers/idxd_spec.o 00:02:51.634 CC test/nvme/connect_stress/connect_stress.o 00:02:51.634 CC test/rpc_client/rpc_client_test.o 00:02:51.634 CC examples/nvmf/nvmf/nvmf.o 00:02:51.634 LINK led 00:02:51.634 LINK simple_copy 00:02:51.634 CXX test/cpp_headers/init.o 00:02:51.634 LINK connect_stress 00:02:51.892 LINK rpc_client_test 00:02:51.893 CC examples/util/zipf/zipf.o 00:02:51.893 CC examples/idxd/perf/perf.o 00:02:51.893 CC examples/thread/thread/thread_ex.o 00:02:51.893 CXX test/cpp_headers/ioat.o 00:02:51.893 LINK nvmf 00:02:51.893 CC test/nvme/boot_partition/boot_partition.o 00:02:51.893 CC test/nvme/compliance/nvme_compliance.o 00:02:51.893 LINK zipf 00:02:51.893 CXX test/cpp_headers/ioat_spec.o 00:02:52.150 CXX test/cpp_headers/iscsi_spec.o 00:02:52.150 CC test/thread/poller_perf/poller_perf.o 00:02:52.150 LINK thread 00:02:52.150 LINK boot_partition 00:02:52.150 CXX test/cpp_headers/json.o 00:02:52.150 CXX test/cpp_headers/jsonrpc.o 00:02:52.150 LINK idxd_perf 00:02:52.150 CXX test/cpp_headers/keyring.o 00:02:52.408 LINK nvme_compliance 00:02:52.408 LINK poller_perf 00:02:52.408 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.408 CXX test/cpp_headers/keyring_module.o 00:02:52.408 CXX test/cpp_headers/likely.o 00:02:52.408 CXX test/cpp_headers/log.o 00:02:52.408 CXX test/cpp_headers/lvol.o 00:02:52.408 CC test/nvme/fused_ordering/fused_ordering.o 00:02:52.408 CXX test/cpp_headers/memory.o 00:02:52.408 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:52.408 LINK interrupt_tgt 00:02:52.666 CXX test/cpp_headers/mmio.o 00:02:52.666 CXX test/cpp_headers/nbd.o 00:02:52.666 CC test/nvme/fdp/fdp.o 00:02:52.666 CXX test/cpp_headers/notify.o 00:02:52.666 CXX test/cpp_headers/nvme.o 00:02:52.666 CC test/nvme/cuse/cuse.o 00:02:52.666 CXX test/cpp_headers/nvme_intel.o 00:02:52.666 LINK doorbell_aers 00:02:52.666 LINK fused_ordering 00:02:52.666 CXX test/cpp_headers/nvme_ocssd.o 
00:02:52.666 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:52.666 CXX test/cpp_headers/nvme_spec.o 00:02:52.924 CXX test/cpp_headers/nvme_zns.o 00:02:52.924 CXX test/cpp_headers/nvmf_cmd.o 00:02:52.924 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:52.924 CXX test/cpp_headers/nvmf.o 00:02:52.924 LINK fdp 00:02:52.924 CXX test/cpp_headers/nvmf_spec.o 00:02:52.924 CXX test/cpp_headers/nvmf_transport.o 00:02:52.924 CXX test/cpp_headers/opal.o 00:02:52.924 CXX test/cpp_headers/opal_spec.o 00:02:53.182 CXX test/cpp_headers/pci_ids.o 00:02:53.182 CXX test/cpp_headers/pipe.o 00:02:53.182 CXX test/cpp_headers/queue.o 00:02:53.182 CXX test/cpp_headers/reduce.o 00:02:53.182 CXX test/cpp_headers/rpc.o 00:02:53.182 CXX test/cpp_headers/scheduler.o 00:02:53.182 CXX test/cpp_headers/scsi.o 00:02:53.182 CXX test/cpp_headers/scsi_spec.o 00:02:53.182 CXX test/cpp_headers/sock.o 00:02:53.182 CXX test/cpp_headers/stdinc.o 00:02:53.182 CXX test/cpp_headers/string.o 00:02:53.182 LINK esnap 00:02:53.182 CXX test/cpp_headers/thread.o 00:02:53.182 CXX test/cpp_headers/trace.o 00:02:53.440 CXX test/cpp_headers/trace_parser.o 00:02:53.440 CXX test/cpp_headers/tree.o 00:02:53.440 CXX test/cpp_headers/ublk.o 00:02:53.440 CXX test/cpp_headers/util.o 00:02:53.440 CXX test/cpp_headers/uuid.o 00:02:53.440 CXX test/cpp_headers/version.o 00:02:53.440 CXX test/cpp_headers/vfio_user_pci.o 00:02:53.440 CXX test/cpp_headers/vfio_user_spec.o 00:02:53.440 CXX test/cpp_headers/vhost.o 00:02:53.440 CXX test/cpp_headers/vmd.o 00:02:53.440 CXX test/cpp_headers/xor.o 00:02:53.699 CXX test/cpp_headers/zipf.o 00:02:53.699 LINK cuse 00:02:53.957 00:02:53.957 real 1m2.759s 00:02:53.957 user 6m29.041s 00:02:53.957 sys 1m36.289s 00:02:53.957 15:25:55 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:53.957 15:25:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.957 ************************************ 00:02:53.957 END TEST make 00:02:53.957 ************************************ 00:02:53.957 15:25:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.957 15:25:55 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:53.957 15:25:55 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:53.957 15:25:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.957 15:25:55 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.957 15:25:55 -- pm/common@45 -- $ pid=5148 00:02:53.957 15:25:55 -- pm/common@52 -- $ sudo kill -TERM 5148 00:02:53.957 15:25:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.957 15:25:55 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:54.216 15:25:55 -- pm/common@45 -- $ pid=5147 00:02:54.216 15:25:55 -- pm/common@52 -- $ sudo kill -TERM 5147 00:02:54.216 15:25:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:54.216 15:25:55 -- nvmf/common.sh@7 -- # uname -s 00:02:54.216 15:25:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:54.216 15:25:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:54.216 15:25:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:54.216 15:25:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:54.216 15:25:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:54.216 15:25:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:54.216 15:25:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:54.216 15:25:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:02:54.216 15:25:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:54.216 15:25:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:54.216 15:25:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:02:54.216 15:25:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:02:54.216 15:25:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:54.216 15:25:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:54.216 15:25:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:54.216 15:25:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:54.216 15:25:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:54.216 15:25:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:54.216 15:25:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:54.216 15:25:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:54.216 15:25:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.216 15:25:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.216 15:25:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.216 15:25:55 -- paths/export.sh@5 -- # export PATH 00:02:54.216 15:25:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.216 15:25:55 -- nvmf/common.sh@47 -- # : 0 00:02:54.216 15:25:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:54.216 15:25:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:54.216 15:25:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:54.216 15:25:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:54.216 15:25:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:54.216 15:25:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:54.216 15:25:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:54.216 15:25:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:54.216 15:25:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:54.216 15:25:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:54.216 15:25:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:54.216 15:25:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:54.216 15:25:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:54.216 15:25:55 -- spdk/autotest.sh@39 -- # echo 
'|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:54.216 15:25:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:54.216 15:25:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:54.216 15:25:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:54.216 15:25:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:54.216 15:25:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:54.216 15:25:55 -- spdk/autotest.sh@48 -- # udevadm_pid=52137 00:02:54.216 15:25:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:54.216 15:25:55 -- pm/common@17 -- # local monitor 00:02:54.216 15:25:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.216 15:25:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52138 00:02:54.216 15:25:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.216 15:25:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52140 00:02:54.216 15:25:55 -- pm/common@26 -- # sleep 1 00:02:54.216 15:25:55 -- pm/common@21 -- # date +%s 00:02:54.216 15:25:55 -- pm/common@21 -- # date +%s 00:02:54.216 15:25:55 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713367555 00:02:54.216 15:25:55 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713367555 00:02:54.473 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713367555_collect-vmstat.pm.log 00:02:54.473 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713367555_collect-cpu-load.pm.log 00:02:55.406 15:25:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:55.406 15:25:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:55.406 15:25:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:55.406 15:25:56 -- common/autotest_common.sh@10 -- # set +x 00:02:55.406 15:25:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:55.406 15:25:56 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:55.406 15:25:56 -- common/autotest_common.sh@10 -- # set +x 00:02:55.406 15:25:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:55.406 15:25:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:55.406 15:25:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:55.406 15:25:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:55.406 15:25:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:55.406 15:25:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:55.406 15:25:56 -- common/autotest_common.sh@1441 -- # uname 00:02:55.406 15:25:56 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:55.406 15:25:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:55.406 15:25:56 -- common/autotest_common.sh@1461 -- # uname 00:02:55.406 15:25:56 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:55.406 15:25:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:55.406 15:25:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:55.406 15:25:56 -- spdk/autotest.sh@72 -- # hash lcov 00:02:55.406 15:25:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:55.406 15:25:56 -- 
spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:55.406 --rc lcov_branch_coverage=1 00:02:55.406 --rc lcov_function_coverage=1 00:02:55.406 --rc genhtml_branch_coverage=1 00:02:55.406 --rc genhtml_function_coverage=1 00:02:55.406 --rc genhtml_legend=1 00:02:55.406 --rc geninfo_all_blocks=1 00:02:55.406 ' 00:02:55.406 15:25:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:55.406 --rc lcov_branch_coverage=1 00:02:55.406 --rc lcov_function_coverage=1 00:02:55.406 --rc genhtml_branch_coverage=1 00:02:55.406 --rc genhtml_function_coverage=1 00:02:55.406 --rc genhtml_legend=1 00:02:55.406 --rc geninfo_all_blocks=1 00:02:55.406 ' 00:02:55.406 15:25:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:55.406 --rc lcov_branch_coverage=1 00:02:55.406 --rc lcov_function_coverage=1 00:02:55.406 --rc genhtml_branch_coverage=1 00:02:55.406 --rc genhtml_function_coverage=1 00:02:55.406 --rc genhtml_legend=1 00:02:55.406 --rc geninfo_all_blocks=1 00:02:55.406 --no-external' 00:02:55.406 15:25:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:55.406 --rc lcov_branch_coverage=1 00:02:55.406 --rc lcov_function_coverage=1 00:02:55.406 --rc genhtml_branch_coverage=1 00:02:55.406 --rc genhtml_function_coverage=1 00:02:55.406 --rc genhtml_legend=1 00:02:55.406 --rc geninfo_all_blocks=1 00:02:55.406 --no-external' 00:02:55.406 15:25:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:55.406 lcov: LCOV version 1.14 00:02:55.406 15:25:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:03.523 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:03.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:03.523 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:03.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:03.523 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:03.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:10.087 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:22.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:22.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:22.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:22.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:22.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:22.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:22.314 
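The exports above seed LCOV_OPTS/LCOV with branch and function coverage enabled, and the lcov -c -i -t Baseline run records an initial all-zero baseline from the freshly built .gcno files; the geninfo "no functions found" warnings around here simply flag objects that contain no instrumented functions. The later capture and merge steps fall outside this excerpt, but a conventional lcov 1.14 flow built on such a baseline looks roughly like this (cov_test.info and cov_total.info are illustrative names; the real run also passes the --rc switches shown above):

# Illustrative baseline-plus-merge lcov flow; not the literal autotest.sh commands.
src=/home/vagrant/spdk_repo/spdk
out=$src/../output
lcov -q -c -i -d "$src" -t Baseline -o "$out/cov_base.info"   # zero-count baseline from .gcno files
# ... run the tests so the instrumented binaries write .gcda counters ...
lcov -q -c -d "$src" -t Tests -o "$out/cov_test.info"         # capture the real counters
lcov -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge so untested files still appear
genhtml "$out/cov_total.info" -o "$out/coverage_html"         # render an HTML report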
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:22.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:22.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:22.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:22.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:22.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:22.831 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:22.831 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:22.831 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:22.831 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:22.831 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:22.831 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:22.831 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:22.832 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:22.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:22.832 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:23.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:23.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:23.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:23.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:26.374 15:26:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:26.374 15:26:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:26.374 15:26:27 -- common/autotest_common.sh@10 -- # set +x 00:03:26.374 15:26:27 -- spdk/autotest.sh@91 -- # rm -f 00:03:26.374 15:26:27 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:27.309 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:27.309 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:27.309 15:26:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:27.309 15:26:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:27.309 15:26:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:27.309 15:26:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:27.309 15:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.309 15:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:27.309 15:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:27.309 15:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.309 15:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:03:27.309 15:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:03:27.309 15:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.309 15:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:03:27.309 15:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:03:27.309 15:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.309 15:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:27.309 15:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:27.309 15:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.309 15:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.309 15:26:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:27.309 15:26:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.309 15:26:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.309 15:26:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:27.309 15:26:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:27.309 
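The get_zoned_devs loop traced above walks /sys/block/nvme* and records any namespace whose queue/zoned attribute reports something other than "none", so that later cleanup steps can leave zoned devices alone. A standalone version of that sysfs probe might look like this (a sketch, not the exact autotest_common.sh helper):

# Hypothetical re-creation of the zoned-namespace filter seen in the trace.
declare -A zoned_devs=()
for sysdev in /sys/block/nvme*; do
    [[ -e $sysdev/queue/zoned ]] || continue          # very old kernels lack the attribute
    if [[ $(<"$sysdev/queue/zoned") != none ]]; then  # "host-aware"/"host-managed" mean zoned
        zoned_devs["${sysdev##*/}"]=1                 # remember e.g. nvme0n1
    fi
done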
15:26:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.309 No valid GPT data, bailing 00:03:27.309 15:26:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.309 15:26:28 -- scripts/common.sh@391 -- # pt= 00:03:27.309 15:26:28 -- scripts/common.sh@392 -- # return 1 00:03:27.309 15:26:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.309 1+0 records in 00:03:27.309 1+0 records out 00:03:27.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527427 s, 199 MB/s 00:03:27.309 15:26:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.309 15:26:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.309 15:26:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n2 00:03:27.309 15:26:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:03:27.310 15:26:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:03:27.310 No valid GPT data, bailing 00:03:27.310 15:26:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:27.310 15:26:28 -- scripts/common.sh@391 -- # pt= 00:03:27.310 15:26:28 -- scripts/common.sh@392 -- # return 1 00:03:27.310 15:26:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:03:27.310 1+0 records in 00:03:27.310 1+0 records out 00:03:27.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520887 s, 201 MB/s 00:03:27.310 15:26:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.310 15:26:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.310 15:26:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n3 00:03:27.310 15:26:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:03:27.310 15:26:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:03:27.310 No valid GPT data, bailing 00:03:27.310 15:26:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:27.569 15:26:28 -- scripts/common.sh@391 -- # pt= 00:03:27.569 15:26:28 -- scripts/common.sh@392 -- # return 1 00:03:27.569 15:26:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:03:27.569 1+0 records in 00:03:27.569 1+0 records out 00:03:27.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464659 s, 226 MB/s 00:03:27.569 15:26:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.569 15:26:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.569 15:26:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:27.569 15:26:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:27.569 15:26:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:27.569 No valid GPT data, bailing 00:03:27.569 15:26:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:27.569 15:26:28 -- scripts/common.sh@391 -- # pt= 00:03:27.569 15:26:28 -- scripts/common.sh@392 -- # return 1 00:03:27.569 15:26:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:27.569 1+0 records in 00:03:27.569 1+0 records out 00:03:27.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398208 s, 263 MB/s 00:03:27.569 15:26:28 -- spdk/autotest.sh@118 -- # sync 00:03:27.569 15:26:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.569 15:26:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.569 15:26:28 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:03:29.469 15:26:30 -- spdk/autotest.sh@124 -- # uname -s 00:03:29.469 15:26:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:29.469 15:26:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.469 15:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.469 15:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.469 15:26:30 -- common/autotest_common.sh@10 -- # set +x 00:03:29.469 ************************************ 00:03:29.469 START TEST setup.sh 00:03:29.469 ************************************ 00:03:29.469 15:26:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.469 * Looking for test storage... 00:03:29.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.469 15:26:30 -- setup/test-setup.sh@10 -- # uname -s 00:03:29.469 15:26:30 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:29.469 15:26:30 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.469 15:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.469 15:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.469 15:26:30 -- common/autotest_common.sh@10 -- # set +x 00:03:29.469 ************************************ 00:03:29.469 START TEST acl 00:03:29.469 ************************************ 00:03:29.469 15:26:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.728 * Looking for test storage... 00:03:29.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.728 15:26:30 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:29.728 15:26:30 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:29.728 15:26:30 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:29.728 15:26:30 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:29.728 15:26:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.728 15:26:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:29.728 15:26:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:29.728 15:26:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.728 15:26:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:03:29.728 15:26:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:03:29.728 15:26:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.728 15:26:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:03:29.728 15:26:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:03:29.728 15:26:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.728 15:26:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:29.728 15:26:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 
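In the pre-cleanup pass just above, each /dev/nvme*n* namespace is probed with spdk-gpt.py and blkid -s PTTYPE; the "No valid GPT data, bailing" lines mean nothing recognisable was found, so the first MiB of the device is zeroed with dd before the tests start. A rough sketch of that probe-then-wipe idiom (the device list is hard-coded here for illustration, and the wipe needs root):

# Illustrative partition-table probe followed by a 1 MiB wipe; not the literal autotest.sh loop.
for blk in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1; do
    pt=$(blkid -s PTTYPE -o value "$blk")        # empty when no partition table is present
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$blk" bs=1M count=1  # clobber any stale metadata in the first MiB
    fi
done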
00:03:29.728 15:26:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:29.728 15:26:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.728 15:26:30 -- setup/acl.sh@12 -- # devs=() 00:03:29.728 15:26:30 -- setup/acl.sh@12 -- # declare -a devs 00:03:29.728 15:26:30 -- setup/acl.sh@13 -- # drivers=() 00:03:29.728 15:26:30 -- setup/acl.sh@13 -- # declare -A drivers 00:03:29.728 15:26:30 -- setup/acl.sh@51 -- # setup reset 00:03:29.728 15:26:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.728 15:26:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.295 15:26:31 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:30.295 15:26:31 -- setup/acl.sh@16 -- # local dev driver 00:03:30.295 15:26:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.295 15:26:31 -- setup/acl.sh@15 -- # setup output status 00:03:30.295 15:26:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.295 15:26:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # continue 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 Hugepages 00:03:31.231 node hugesize free / total 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # continue 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 00:03:31.231 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # continue 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:31.231 15:26:32 -- setup/acl.sh@20 -- # continue 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.231 15:26:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.231 15:26:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 15:26:32 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.231 15:26:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:31.231 15:26:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.231 15:26:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.231 15:26:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.231 15:26:32 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:31.231 15:26:32 -- setup/acl.sh@54 -- # run_test denied denied 00:03:31.231 15:26:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.231 15:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.231 15:26:32 -- common/autotest_common.sh@10 -- # set +x 00:03:31.489 ************************************ 00:03:31.489 START TEST denied 00:03:31.489 ************************************ 00:03:31.489 15:26:32 -- common/autotest_common.sh@1111 -- # denied 00:03:31.489 15:26:32 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:31.489 15:26:32 -- setup/acl.sh@38 -- # setup output config 00:03:31.489 15:26:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.489 15:26:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.489 15:26:32 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:32.425 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:32.425 15:26:33 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:32.425 15:26:33 -- setup/acl.sh@28 -- # local dev driver 00:03:32.425 15:26:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.425 15:26:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:32.425 15:26:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:32.425 15:26:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.425 15:26:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.425 15:26:33 -- setup/acl.sh@41 -- # setup reset 00:03:32.425 15:26:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.425 15:26:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.994 ************************************ 00:03:32.994 END TEST denied 00:03:32.994 ************************************ 00:03:32.994 00:03:32.994 real 0m1.517s 00:03:32.994 user 0m0.582s 00:03:32.994 sys 0m0.871s 00:03:32.994 15:26:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.994 15:26:34 -- common/autotest_common.sh@10 -- # set +x 00:03:32.994 15:26:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:32.994 15:26:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.994 15:26:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.994 15:26:34 -- common/autotest_common.sh@10 -- # set +x 00:03:32.994 ************************************ 00:03:32.994 START TEST allowed 00:03:32.994 ************************************ 00:03:32.994 15:26:34 -- common/autotest_common.sh@1111 -- # allowed 00:03:32.994 15:26:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:32.994 15:26:34 -- setup/acl.sh@45 -- # setup output config 00:03:32.994 15:26:34 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:32.994 15:26:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.994 15:26:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:34.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.038 15:26:35 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:34.038 15:26:35 -- setup/acl.sh@28 -- # local dev driver 00:03:34.038 15:26:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:34.038 15:26:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:34.038 15:26:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:34.038 15:26:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:34.038 15:26:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:34.038 15:26:35 -- setup/acl.sh@48 -- # setup reset 00:03:34.038 15:26:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.038 15:26:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:34.606 00:03:34.606 real 0m1.557s 00:03:34.606 user 0m0.653s 00:03:34.606 sys 0m0.901s 00:03:34.606 15:26:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.606 ************************************ 00:03:34.606 END TEST allowed 00:03:34.606 
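The denied/allowed ACL tests above steer scripts/setup.sh with PCI_BLOCKED and PCI_ALLOWED and then verify the outcome by resolving each device's driver symlink in sysfs (readlink -f /sys/bus/pci/devices/<bdf>/driver). A tiny helper in the same spirit (the function name is made up for this sketch):

# Hypothetical helper mirroring the sysfs check used by the acl tests in this log.
pci_driver_of() {
    local link="/sys/bus/pci/devices/$1/driver"
    [[ -e $link ]] || { echo none; return; }   # an unbound device has no driver symlink
    basename "$(readlink -f "$link")"          # e.g. nvme or uio_pci_generic
}
pci_driver_of 0000:00:10.0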
************************************ 00:03:34.606 15:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:34.606 00:03:34.606 real 0m5.128s 00:03:34.606 user 0m2.200s 00:03:34.606 sys 0m2.852s 00:03:34.606 15:26:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.606 15:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:34.606 ************************************ 00:03:34.606 END TEST acl 00:03:34.606 ************************************ 00:03:34.606 15:26:36 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:34.606 15:26:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.606 15:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.606 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:03:34.865 ************************************ 00:03:34.865 START TEST hugepages 00:03:34.865 ************************************ 00:03:34.865 15:26:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:34.865 * Looking for test storage... 00:03:34.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:34.865 15:26:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:34.865 15:26:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:34.865 15:26:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:34.865 15:26:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:34.865 15:26:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:34.865 15:26:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:34.865 15:26:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:34.865 15:26:36 -- setup/common.sh@18 -- # local node= 00:03:34.865 15:26:36 -- setup/common.sh@19 -- # local var val 00:03:34.865 15:26:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.865 15:26:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.865 15:26:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.865 15:26:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.865 15:26:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.865 15:26:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.865 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 5599984 kB' 'MemAvailable: 7385084 kB' 'Buffers: 2436 kB' 'Cached: 1997968 kB' 'SwapCached: 0 kB' 'Active: 834716 kB' 'Inactive: 1271908 kB' 'Active(anon): 116708 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 108160 kB' 'Mapped: 48868 kB' 'Shmem: 10488 kB' 'KReclaimable: 64696 kB' 'Slab: 138608 kB' 'SReclaimable: 64696 kB' 'SUnreclaim: 73912 kB' 'KernelStack: 6236 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 339136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.866 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.866 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # continue 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.867 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.867 15:26:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.867 15:26:36 -- setup/common.sh@33 -- # echo 2048 00:03:34.867 15:26:36 -- setup/common.sh@33 -- # return 0 00:03:34.867 15:26:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:34.867 15:26:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:34.867 15:26:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:34.867 15:26:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:34.867 15:26:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:34.867 15:26:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:34.867 15:26:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:34.867 15:26:36 -- setup/hugepages.sh@207 -- # get_nodes 00:03:34.867 15:26:36 -- setup/hugepages.sh@27 -- # local node 00:03:34.867 15:26:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.867 15:26:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:34.867 15:26:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.867 15:26:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.867 15:26:36 -- setup/hugepages.sh@208 -- # clear_hp 00:03:34.867 15:26:36 -- setup/hugepages.sh@37 -- # local node hp 00:03:34.867 15:26:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.867 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.867 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.867 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.867 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.867 15:26:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.867 15:26:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.867 15:26:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:34.867 15:26:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.867 15:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.867 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:03:35.127 ************************************ 00:03:35.127 START TEST default_setup 00:03:35.127 ************************************ 00:03:35.127 15:26:36 -- common/autotest_common.sh@1111 -- # default_setup 00:03:35.127 15:26:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:35.127 15:26:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.127 15:26:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.127 15:26:36 -- setup/hugepages.sh@51 -- # shift 00:03:35.127 15:26:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.127 15:26:36 -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.127 15:26:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.127 15:26:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.127 15:26:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:35.127 15:26:36 -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:03:35.127 15:26:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.127 15:26:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.127 15:26:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.127 15:26:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.127 15:26:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.127 15:26:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.127 15:26:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.127 15:26:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.127 15:26:36 -- setup/hugepages.sh@73 -- # return 0 00:03:35.127 15:26:36 -- setup/hugepages.sh@137 -- # setup output 00:03:35.127 15:26:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.127 15:26:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:35.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:35.956 15:26:37 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:35.956 15:26:37 -- setup/hugepages.sh@89 -- # local node 00:03:35.956 15:26:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.956 15:26:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.956 15:26:37 -- setup/hugepages.sh@92 -- # local surp 00:03:35.956 15:26:37 -- setup/hugepages.sh@93 -- # local resv 00:03:35.956 15:26:37 -- setup/hugepages.sh@94 -- # local anon 00:03:35.956 15:26:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.956 15:26:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.956 15:26:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.956 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:35.956 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:35.956 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.956 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.956 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.956 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.956 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.956 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.956 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.956 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.956 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7691024 kB' 'MemAvailable: 9475992 kB' 'Buffers: 2436 kB' 'Cached: 1997956 kB' 'SwapCached: 0 kB' 'Active: 851280 kB' 'Inactive: 1271916 kB' 'Active(anon): 133272 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 852 kB' 'Writeback: 0 kB' 'AnonPages: 124484 kB' 'Mapped: 49008 kB' 'Shmem: 10464 kB' 'KReclaimable: 64416 kB' 'Slab: 138264 kB' 'SReclaimable: 64416 kB' 'SUnreclaim: 73848 kB' 'KernelStack: 6224 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.956 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.956 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.957 15:26:37 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.957 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.957 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.957 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.957 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.958 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.958 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.958 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.958 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:35.958 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:35.958 15:26:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.958 15:26:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.958 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.958 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:35.958 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:35.958 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.958 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.958 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.958 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.958 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.958 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.958 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7691024 kB' 'MemAvailable: 9475992 kB' 'Buffers: 2436 kB' 'Cached: 1997956 kB' 'SwapCached: 0 kB' 'Active: 850808 kB' 'Inactive: 1271916 kB' 'Active(anon): 132800 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 852 kB' 'Writeback: 0 kB' 'AnonPages: 124232 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138264 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73852 kB' 'KernelStack: 6176 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.958 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.958 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.958 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.958 15:26:37 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.959 15:26:37 -- setup/common.sh@33 -- # echo 0
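A minimal sketch of the /proc/meminfo lookup that the trace above keeps exercising (the helper traced here is setup/common.sh's get_meminfo; the function name, argument handling and failure path below are simplified assumptions, not the exact implementation):
  get_meminfo_field() {
      # Scan /proc/meminfo line by line; IFS=': ' splits "Key:   value kB"
      # into var=Key and val=value, with the unit landing in the throwaway field.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # e.g. HugePages_Surp -> 0 on this runner
              return 0
          fi
      done < /proc/meminfo
      return 1
  }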
15:26:37 -- setup/common.sh@33 -- # return 0 00:03:35.959 15:26:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.959 15:26:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.959 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.959 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:35.959 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:35.959 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.959 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.959 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.959 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.959 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.959 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.959 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.959 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.959 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7691024 kB' 'MemAvailable: 9475992 kB' 'Buffers: 2436 kB' 'Cached: 1997956 kB' 'SwapCached: 0 kB' 'Active: 850828 kB' 'Inactive: 1271916 kB' 'Active(anon): 132820 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 852 kB' 'Writeback: 0 kB' 'AnonPages: 123940 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138264 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73852 kB' 'KernelStack: 6144 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.959 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.959 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.959 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.960 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.960 15:26:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.960 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32
-- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.961 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:35.961 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:35.961 15:26:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.961 nr_hugepages=1024 00:03:35.961 15:26:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.961 resv_hugepages=0 00:03:35.961 15:26:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.961 surplus_hugepages=0 00:03:35.961 15:26:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.961 anon_hugepages=0 00:03:35.961 15:26:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.961 15:26:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.961 15:26:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.961 15:26:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.961 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.961 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:35.961 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:35.961 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.961 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.961 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.961 15:26:37 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:35.961 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.961 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7691276 kB' 'MemAvailable: 9476244 kB' 'Buffers: 2436 kB' 'Cached: 1997956 kB' 'SwapCached: 0 kB' 'Active: 850820 kB' 'Inactive: 1271916 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 852 kB' 'Writeback: 0 kB' 'AnonPages: 124192 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138264 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73852 kB' 'KernelStack: 6196 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.961 15:26:37 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.961 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.962 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.962 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # 
continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.963 15:26:37 -- setup/common.sh@33 -- # echo 1024 00:03:35.963 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:35.963 15:26:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.963 15:26:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.963 15:26:37 -- setup/hugepages.sh@27 -- # local node 00:03:35.963 15:26:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.963 15:26:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.963 15:26:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.963 15:26:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.963 15:26:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.963 15:26:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.963 15:26:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.963 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.963 15:26:37 -- setup/common.sh@18 -- # local node=0 00:03:35.963 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:35.963 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.963 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.963 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.963 15:26:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.963 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.963 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7691528 kB' 'MemUsed: 4550436 kB' 'SwapCached: 0 kB' 'Active: 850792 kB' 'Inactive: 1271924 kB' 'Active(anon): 132784 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 852 kB' 'Writeback: 0 kB' 'FilePages: 2000396 kB' 'Mapped: 48828 kB' 'AnonPages: 124164 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138264 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.963 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.963 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # continue 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.964 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.964 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.964 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:35.964 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:35.964 15:26:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.964 15:26:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.964 
15:26:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.964 15:26:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.964 node0=1024 expecting 1024 00:03:35.964 15:26:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.964 15:26:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.964 00:03:35.964 real 0m1.018s 00:03:35.964 user 0m0.468s 00:03:35.964 sys 0m0.522s 00:03:35.964 15:26:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.964 15:26:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.964 ************************************ 00:03:35.964 END TEST default_setup 00:03:35.964 ************************************ 00:03:35.964 15:26:37 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:35.964 15:26:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.964 15:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.964 15:26:37 -- common/autotest_common.sh@10 -- # set +x 00:03:36.223 ************************************ 00:03:36.223 START TEST per_node_1G_alloc 00:03:36.223 ************************************ 00:03:36.223 15:26:37 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:36.223 15:26:37 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:36.223 15:26:37 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:36.223 15:26:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.223 15:26:37 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.223 15:26:37 -- setup/hugepages.sh@51 -- # shift 00:03:36.223 15:26:37 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.223 15:26:37 -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.223 15:26:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.223 15:26:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.223 15:26:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.223 15:26:37 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.223 15:26:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.223 15:26:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.223 15:26:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.223 15:26:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.223 15:26:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.223 15:26:37 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.223 15:26:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.223 15:26:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:36.223 15:26:37 -- setup/hugepages.sh@73 -- # return 0 00:03:36.223 15:26:37 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:36.223 15:26:37 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:36.223 15:26:37 -- setup/hugepages.sh@146 -- # setup output 00:03:36.223 15:26:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.223 15:26:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.486 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.486 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.486 15:26:37 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:36.486 15:26:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:36.486 15:26:37 -- setup/hugepages.sh@89 -- # local node 00:03:36.486 
15:26:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.486 15:26:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.486 15:26:37 -- setup/hugepages.sh@92 -- # local surp 00:03:36.486 15:26:37 -- setup/hugepages.sh@93 -- # local resv 00:03:36.486 15:26:37 -- setup/hugepages.sh@94 -- # local anon 00:03:36.486 15:26:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.486 15:26:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.486 15:26:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.486 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:36.486 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:36.486 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.486 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.486 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.486 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.486 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.486 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8745160 kB' 'MemAvailable: 10530140 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 851580 kB' 'Inactive: 1271928 kB' 'Active(anon): 133572 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'AnonPages: 124680 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138292 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6196 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 
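The trace above is setup/common.sh's get_meminfo scanning a meminfo dump one "Key: value" pair at a time (the repeated IFS=': ' / read -r var val _ / continue statements) until it reaches the field it was asked for. A minimal sketch of that lookup pattern, assuming /proc/meminfo-style input; get_field is an illustrative name, not the SPDK helper itself:

# Minimal sketch of the lookup pattern being traced: split each "Key: value"
# line on ':' and whitespace and return the value for the requested key.
# get_field is an illustrative name, not the SPDK helper itself.
get_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

get_field AnonHugePages    # prints 0 in this run's meminfo dumps
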
00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.486 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.486 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 
15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.487 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:36.487 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:36.487 15:26:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.487 15:26:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.487 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.487 15:26:37 -- setup/common.sh@18 -- # local 
node= 00:03:36.487 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:36.488 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.488 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.488 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.488 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.488 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.488 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8745160 kB' 'MemAvailable: 10530140 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850972 kB' 'Inactive: 1271928 kB' 'Active(anon): 132964 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'AnonPages: 124036 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138292 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6176 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 
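When get_meminfo is given a node, the trace shows it switching mem_f to /sys/devices/system/node/node<N>/meminfo and stripping the "Node <N> " prefix with the extglob pattern +([0-9]) before running the same field scan. A rough sketch of that selection and strip under the same assumptions; node_meminfo is an illustrative name, not the SPDK function, and awk stands in for the read loop:

# Rough sketch of the per-node variant visible in the trace: pick the
# per-node meminfo file when a node is given, strip the "Node <N> " prefix
# (extglob +([0-9])), then extract one field. node_meminfo is an
# illustrative name, not the SPDK function.
shopt -s extglob
node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node 0 "
    printf '%s\n' "${mem[@]}" | awk -F'[: ]+' -v k="$get" '$1 == k { print $2; exit }'
}

node_meminfo HugePages_Surp 0    # surplus pages on node 0; 0 in this run
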
00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- 
setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:36.489 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:36.489 15:26:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.489 15:26:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.489 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.490 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:36.490 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:36.490 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.490 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.490 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.490 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.490 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.490 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8745160 kB' 'MemAvailable: 10530140 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850708 kB' 'Inactive: 1271928 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'AnonPages: 123816 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138292 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6176 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.490 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.490 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 
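The long runs of IFS=': ' / read -r var val _ / continue entries above and below are setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: every key that is not the one requested (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total in this stretch of the trace) takes the continue branch, and only the matching key's value is echoed back to hugepages.sh. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from setup/common.sh, so the real helper may differ in detail:

    # sketch only -- reconstructed from the trace, not the verbatim setup/common.sh helper
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # per-node files prefix each line with "Node N" (needs extglob)
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                      # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
                return 0
            fi
            continue                             # one [[ ... ]] / continue pair per non-matching key in the trace
        done < <(printf '%s\n' "${mem[@]}")
    }

The values it returns here feed the checks hugepages.sh makes next: surp=0, resv=0, and HugePages_Total has to equal nr_hugepages + surp + resv (512 for this per_node_1G_alloc run).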
00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.491 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.491 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.492 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:36.492 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:36.492 15:26:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.492 nr_hugepages=512 00:03:36.492 15:26:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:36.492 resv_hugepages=0 00:03:36.492 15:26:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.492 surplus_hugepages=0 00:03:36.492 15:26:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.492 anon_hugepages=0 00:03:36.492 15:26:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.492 15:26:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:36.492 15:26:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:36.492 15:26:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.492 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.492 15:26:37 -- setup/common.sh@18 -- # local node= 00:03:36.492 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:36.492 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.492 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.492 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.492 15:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.492 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.492 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8745160 kB' 'MemAvailable: 10530140 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850956 kB' 'Inactive: 1271928 kB' 'Active(anon): 132948 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'AnonPages: 124064 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138288 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73876 kB' 'KernelStack: 6160 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.492 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.492 15:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- 
setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.493 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.493 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 
15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.494 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.494 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.495 15:26:37 -- setup/common.sh@33 -- # echo 512 00:03:36.495 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:36.495 15:26:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv 
)) 00:03:36.495 15:26:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.495 15:26:37 -- setup/hugepages.sh@27 -- # local node 00:03:36.495 15:26:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.495 15:26:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.495 15:26:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.495 15:26:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.495 15:26:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.495 15:26:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.495 15:26:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.495 15:26:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.495 15:26:37 -- setup/common.sh@18 -- # local node=0 00:03:36.495 15:26:37 -- setup/common.sh@19 -- # local var val 00:03:36.495 15:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.495 15:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.495 15:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.495 15:26:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.495 15:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.495 15:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8746396 kB' 'MemUsed: 3495568 kB' 'SwapCached: 0 kB' 'Active: 850944 kB' 'Inactive: 1271928 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'FilePages: 2000400 kB' 'Mapped: 48836 kB' 'AnonPages: 124096 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138288 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.495 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.495 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.496 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.496 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- 
setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # continue 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.755 15:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.755 15:26:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.755 15:26:37 -- setup/common.sh@33 -- # echo 0 00:03:36.755 15:26:37 -- setup/common.sh@33 -- # return 0 00:03:36.755 15:26:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.755 15:26:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.755 15:26:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.755 15:26:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.755 node0=512 expecting 512 00:03:36.755 15:26:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.755 15:26:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:36.755 00:03:36.755 real 0m0.497s 00:03:36.755 user 0m0.256s 00:03:36.755 sys 0m0.276s 00:03:36.755 15:26:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:36.755 15:26:37 -- common/autotest_common.sh@10 -- # set +x 00:03:36.755 ************************************ 00:03:36.755 END TEST per_node_1G_alloc 00:03:36.755 ************************************ 00:03:36.755 15:26:37 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:36.755 15:26:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.755 15:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.755 15:26:37 -- common/autotest_common.sh@10 -- # set +x 00:03:36.755 ************************************ 00:03:36.755 START TEST even_2G_alloc 00:03:36.755 ************************************ 00:03:36.755 15:26:38 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:36.755 15:26:38 -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:03:36.755 15:26:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.755 15:26:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.755 15:26:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.755 15:26:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.755 15:26:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.755 15:26:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.755 15:26:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.755 15:26:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.755 15:26:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.755 15:26:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.756 15:26:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.756 15:26:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.756 15:26:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.756 15:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.756 15:26:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:36.756 15:26:38 -- setup/hugepages.sh@83 -- # : 0 00:03:36.756 15:26:38 -- setup/hugepages.sh@84 -- # : 0 00:03:36.756 15:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.756 15:26:38 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:36.756 15:26:38 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:36.756 15:26:38 -- setup/hugepages.sh@153 -- # setup output 00:03:36.756 15:26:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.756 15:26:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.016 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.016 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.016 15:26:38 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:37.016 15:26:38 -- setup/hugepages.sh@89 -- # local node 00:03:37.016 15:26:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.016 15:26:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.016 15:26:38 -- setup/hugepages.sh@92 -- # local surp 00:03:37.016 15:26:38 -- setup/hugepages.sh@93 -- # local resv 00:03:37.016 15:26:38 -- setup/hugepages.sh@94 -- # local anon 00:03:37.016 15:26:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.016 15:26:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.016 15:26:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.016 15:26:38 -- setup/common.sh@18 -- # local node= 00:03:37.016 15:26:38 -- setup/common.sh@19 -- # local var val 00:03:37.016 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.016 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.016 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.016 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.016 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.016 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7704036 kB' 'MemAvailable: 9489016 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 851196 kB' 
'Inactive: 1271928 kB' 'Active(anon): 133188 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124304 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138328 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73916 kB' 'KernelStack: 6184 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.016 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.016 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 
-- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 
15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.017 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.017 15:26:38 -- setup/common.sh@33 -- # echo 0 00:03:37.017 15:26:38 -- setup/common.sh@33 -- # return 0 00:03:37.017 15:26:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.017 15:26:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.017 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.017 15:26:38 -- setup/common.sh@18 -- # local node= 00:03:37.017 15:26:38 -- setup/common.sh@19 -- # local var val 00:03:37.017 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.017 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.017 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.017 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.017 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.017 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.017 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7704036 kB' 'MemAvailable: 9489016 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850940 kB' 'Inactive: 1271928 kB' 'Active(anon): 132932 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124068 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138328 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73916 kB' 'KernelStack: 6152 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354940 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 
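The long run of "continue" entries here is setup/common.sh's get_meminfo helper walking a snapshot of /proc/meminfo one field at a time until it reaches the key it was asked for (the backslash-escaped names such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how bash xtrace prints the literal key being compared). Below is a minimal sketch of that pattern; it is an illustration assuming the behavior visible in the trace, not the SPDK helper itself, and get_meminfo_sketch is a hypothetical name:

  #!/usr/bin/env bash
  # Sketch: print one value from /proc/meminfo, e.g.
  #   get_meminfo_sketch HugePages_Surp   -> 0 on this VM
  get_meminfo_sketch() {
      local get=$1 var val _ line
      local -a mem
      mapfile -t mem < /proc/meminfo              # take one snapshot of the file
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB"
          if [[ $var == "$get" ]]; then
              echo "$val"                         # requested key found
              return 0
          fi
          # every non-matching key is skipped, which is what the repeated
          # "continue" entries in the trace correspond to
      done
      return 1
  }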
00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.018 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.018 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
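For a single field, the same value can be pulled out in one pass; these one-liners are only illustrative alternatives, not something the test scripts run:

  # Equivalent in effect to a single get_meminfo call (illustrative only):
  awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
  awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo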
00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.019 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.019 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 
-- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.280 15:26:38 -- setup/common.sh@33 -- # echo 0 00:03:37.280 15:26:38 -- setup/common.sh@33 -- # return 0 00:03:37.280 15:26:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.280 15:26:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.280 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.280 15:26:38 -- setup/common.sh@18 -- # local node= 00:03:37.280 15:26:38 -- setup/common.sh@19 -- # local var val 00:03:37.280 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.280 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.280 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.280 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.280 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.280 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7704036 kB' 'MemAvailable: 9489016 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850728 kB' 'Inactive: 1271928 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124096 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138328 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73916 kB' 'KernelStack: 6168 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.280 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.280 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 
-- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 
15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.281 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.281 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.282 15:26:38 -- setup/common.sh@33 -- # echo 0 00:03:37.282 15:26:38 -- setup/common.sh@33 -- # return 0 00:03:37.282 15:26:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.282 nr_hugepages=1024 00:03:37.282 15:26:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.282 resv_hugepages=0 00:03:37.282 15:26:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.282 surplus_hugepages=0 00:03:37.282 15:26:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.282 anon_hugepages=0 00:03:37.282 15:26:38 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.282 15:26:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.282 15:26:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.282 15:26:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.282 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.282 15:26:38 -- setup/common.sh@18 -- # local node= 00:03:37.282 15:26:38 -- setup/common.sh@19 -- # local var val 00:03:37.282 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.282 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.282 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.282 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.282 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.282 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7704296 kB' 'MemAvailable: 9489276 kB' 'Buffers: 2436 kB' 'Cached: 1997964 kB' 'SwapCached: 0 kB' 'Active: 850836 kB' 'Inactive: 1271928 kB' 'Active(anon): 132828 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138328 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73916 kB' 'KernelStack: 6184 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': 
' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.282 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.282 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
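At this point the test has already read anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and is now scanning for HugePages_Total to confirm the pool is fully accounted for. A sketch of that bookkeeping, using the values visible in the log (the variable names match the trace; the standalone check itself is illustrative):

  # Accounting check behind hugepages.sh@107/@109/@110 (values from the log):
  nr_hugepages=1024   # pages requested for the even_2G_alloc test
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  total=1024          # HugePages_Total read back from /proc/meminfo

  (( total == nr_hugepages + surp + resv )) && echo "pool fully accounted for"
  (( total == nr_hugepages ))               && echo "no surplus or reserved pages in use"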
00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.283 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.283 15:26:38 -- setup/common.sh@33 -- # echo 1024 00:03:37.283 15:26:38 -- setup/common.sh@33 -- # return 0 00:03:37.283 15:26:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.283 15:26:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.283 15:26:38 -- setup/hugepages.sh@27 -- # local node 00:03:37.283 15:26:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.283 15:26:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.283 15:26:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.283 15:26:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.283 15:26:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.283 15:26:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.283 15:26:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.283 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.283 15:26:38 -- setup/common.sh@18 -- # local node=0 00:03:37.283 15:26:38 -- setup/common.sh@19 -- # local var val 00:03:37.283 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.283 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.283 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.283 15:26:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.283 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.283 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.283 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 
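The lookup now switches to the per-node view: get_nodes found a single node (no_nodes=1), so get_meminfo is called again with node=0 and mem_f pointing at /sys/devices/system/node/node0/meminfo. Every line in that file carries a "Node 0 " prefix, which the trace shows being stripped with the extglob expansion ${mem[@]#Node +([0-9]) } before the usual key/value parsing. A condensed sketch of the same steps (a reconstruction of what the trace shows, not a copy of the script):

  # Per-node lookup as seen in the trace; extglob is required for +([0-9]).
  shopt -s extglob
  node=0
  mem_f=/sys/devices/system/node/node${node}/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " from every line
  printf '%s\n' "${mem[@]}" | awk '$1 == "HugePages_Surp:" {print $2}'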
00:03:37.283 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7704292 kB' 'MemUsed: 4537672 kB' 'SwapCached: 0 kB' 'Active: 851024 kB' 'Inactive: 1271924 kB' 'Active(anon): 133016 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'FilePages: 2000396 kB' 'Mapped: 48912 kB' 'AnonPages: 124216 kB' 'Shmem: 10464 kB' 'KernelStack: 6184 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138328 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 
00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.284 
15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.284 15:26:38 -- setup/common.sh@32 -- # continue 00:03:37.284 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.285 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.285 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.285 15:26:38 -- setup/common.sh@33 -- # echo 0 00:03:37.285 15:26:38 -- setup/common.sh@33 -- # return 0 00:03:37.285 15:26:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.285 15:26:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.285 15:26:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.285 node0=1024 expecting 1024 00:03:37.285 15:26:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.285 15:26:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.285 00:03:37.285 real 0m0.506s 00:03:37.285 user 0m0.251s 00:03:37.285 sys 0m0.290s 00:03:37.285 15:26:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.285 15:26:38 -- common/autotest_common.sh@10 -- # set +x 00:03:37.285 ************************************ 00:03:37.285 END TEST even_2G_alloc 00:03:37.285 ************************************ 00:03:37.285 15:26:38 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:37.285 15:26:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.285 15:26:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.285 15:26:38 -- common/autotest_common.sh@10 -- # set +x 00:03:37.285 ************************************ 00:03:37.285 START TEST odd_alloc 00:03:37.285 ************************************ 00:03:37.285 15:26:38 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:37.285 15:26:38 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:37.285 15:26:38 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:37.285 15:26:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:37.285 15:26:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:37.285 15:26:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.285 15:26:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.285 15:26:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:37.285 15:26:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:37.285 15:26:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.285 15:26:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.285 15:26:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:37.285 15:26:38 -- setup/hugepages.sh@83 -- # : 0 00:03:37.285 15:26:38 -- setup/hugepages.sh@84 -- # : 0 00:03:37.285 15:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.285 15:26:38 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:37.285 15:26:38 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:37.285 15:26:38 -- setup/hugepages.sh@160 -- # setup output 00:03:37.285 15:26:38 -- setup/common.sh@9 -- 
# [[ output == output ]] 00:03:37.285 15:26:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.898 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.898 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.898 15:26:39 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:37.898 15:26:39 -- setup/hugepages.sh@89 -- # local node 00:03:37.898 15:26:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.898 15:26:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.898 15:26:39 -- setup/hugepages.sh@92 -- # local surp 00:03:37.898 15:26:39 -- setup/hugepages.sh@93 -- # local resv 00:03:37.898 15:26:39 -- setup/hugepages.sh@94 -- # local anon 00:03:37.898 15:26:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.898 15:26:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.898 15:26:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.898 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:37.898 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:37.898 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.898 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.898 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.898 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.898 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.898 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7707876 kB' 'MemAvailable: 9492892 kB' 'Buffers: 2436 kB' 'Cached: 1998000 kB' 'SwapCached: 0 kB' 'Active: 851012 kB' 'Inactive: 1271964 kB' 'Active(anon): 133004 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'AnonPages: 124392 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138288 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73876 kB' 'KernelStack: 6152 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 
00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.898 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.898 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 
15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.899 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:37.899 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:37.899 15:26:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.899 15:26:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.899 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.899 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:37.899 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:37.899 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.899 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.899 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.899 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.899 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.899 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7707876 kB' 'MemAvailable: 9492892 kB' 'Buffers: 2436 kB' 'Cached: 1998000 kB' 'SwapCached: 0 kB' 'Active: 850672 kB' 'Inactive: 1271964 kB' 'Active(anon): 132664 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'AnonPages: 123816 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138256 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6180 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.899 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.899 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- 
setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.900 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.900 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:37.900 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:37.900 15:26:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.900 15:26:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.900 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.900 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:37.900 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:37.900 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.900 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.900 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.900 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.900 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.900 15:26:39 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.900 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7707876 kB' 'MemAvailable: 9492892 kB' 'Buffers: 2436 kB' 'Cached: 1998000 kB' 'SwapCached: 0 kB' 'Active: 850656 kB' 'Inactive: 1271964 kB' 'Active(anon): 132648 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138256 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6164 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.901 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.901 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.902 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:37.902 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:37.902 15:26:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.902 15:26:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:37.902 nr_hugepages=1025 00:03:37.902 resv_hugepages=0 00:03:37.902 15:26:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.902 surplus_hugepages=0 00:03:37.902 15:26:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.902 anon_hugepages=0 00:03:37.902 15:26:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.902 15:26:39 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:37.902 15:26:39 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:37.902 15:26:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.902 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.902 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:37.902 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:37.902 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.902 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.902 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.902 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.902 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.902 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710316 kB' 'MemAvailable: 9495336 kB' 'Buffers: 2436 kB' 'Cached: 1998004 kB' 'SwapCached: 0 kB' 'Active: 850608 kB' 'Inactive: 1271968 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 
'Writeback: 0 kB' 'AnonPages: 123732 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138256 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6160 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.902 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.902 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 
15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 
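(Editor's note: the long runs of near-identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries above and below are the xtrace of one field-matching loop in setup/common.sh; the backslash-escaped spelling is simply how "set -x" renders a quoted pattern. A minimal sketch of what each traced iteration does, with illustrative code rather than the script's exact lines:

# Sketch only: scan meminfo until the requested field is found, then print its value.
while IFS=': ' read -r var val _; do
    [[ $var == "HugePages_Total" ]] || continue   # every non-matching field logs one "continue" above
    echo "$val"                                   # 1025 in the odd_alloc pass traced here
    break
done < /proc/meminfo
)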
00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.903 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.903 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.904 15:26:39 -- setup/common.sh@33 -- # echo 1025 00:03:37.904 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:37.904 15:26:39 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:37.904 15:26:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.904 15:26:39 -- setup/hugepages.sh@27 -- # local node 00:03:37.904 15:26:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.904 15:26:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:37.904 15:26:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.904 15:26:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.904 15:26:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.904 15:26:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.904 15:26:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.904 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.904 15:26:39 -- setup/common.sh@18 -- # local node=0 00:03:37.904 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:37.904 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.904 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.904 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.904 15:26:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.904 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.904 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710316 kB' 'MemUsed: 4531648 kB' 'SwapCached: 0 kB' 'Active: 850612 kB' 'Inactive: 1271968 kB' 'Active(anon): 132604 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'FilePages: 2000440 kB' 'Mapped: 48860 kB' 'AnonPages: 123996 kB' 'Shmem: 10464 kB' 'KernelStack: 6144 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138256 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 
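(Editor's note: the pass traced at this point is reading HugePages_Surp for node 0, so the script switches from /proc/meminfo to the per-node meminfo file and strips the "Node 0 " prefix before splitting each line. A hedged sketch of that lookup; the helper name below is illustrative, not the script's own:

# Sketch, not the script's helper: fetch one meminfo field, preferring the per-node file.
get_node_meminfo() {
    local field=$1 node=$2 file=/proc/meminfo line var val _
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}                 # per-node lines carry a "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
# e.g. get_node_meminfo HugePages_Surp 0   # prints 0 on this machine, matching the trace
)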
00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.904 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.904 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # continue 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.905 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.905 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.905 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:37.905 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:37.905 15:26:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.905 15:26:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.905 15:26:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.905 node0=1025 expecting 1025 00:03:37.905 15:26:39 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:37.905 15:26:39 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:37.905 00:03:37.905 real 0m0.520s 00:03:37.905 user 0m0.264s 00:03:37.905 sys 0m0.288s 00:03:37.905 15:26:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.905 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:03:37.905 ************************************ 00:03:37.905 END TEST odd_alloc 00:03:37.905 ************************************ 00:03:37.905 15:26:39 -- setup/hugepages.sh@214 -- # run_test 
custom_alloc custom_alloc 00:03:37.905 15:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.905 15:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.905 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:03:37.905 ************************************ 00:03:37.905 START TEST custom_alloc 00:03:37.905 ************************************ 00:03:37.905 15:26:39 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:37.905 15:26:39 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:37.905 15:26:39 -- setup/hugepages.sh@169 -- # local node 00:03:37.905 15:26:39 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:37.905 15:26:39 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:37.905 15:26:39 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:37.905 15:26:39 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:37.905 15:26:39 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.905 15:26:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:37.905 15:26:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.905 15:26:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.905 15:26:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:37.905 15:26:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.905 15:26:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.905 15:26:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@83 -- # : 0 00:03:37.905 15:26:39 -- setup/hugepages.sh@84 -- # : 0 00:03:37.905 15:26:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:37.905 15:26:39 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:37.905 15:26:39 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:37.905 15:26:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.905 15:26:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.905 15:26:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:37.905 15:26:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.905 15:26:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.905 15:26:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:37.905 15:26:39 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:37.905 15:26:39 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:37.905 15:26:39 -- setup/hugepages.sh@78 -- # return 0 00:03:37.905 15:26:39 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:37.905 15:26:39 -- setup/hugepages.sh@187 -- # setup 
output 00:03:37.905 15:26:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.905 15:26:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:38.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.476 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.476 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.476 15:26:39 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:38.476 15:26:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:38.476 15:26:39 -- setup/hugepages.sh@89 -- # local node 00:03:38.476 15:26:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.476 15:26:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.476 15:26:39 -- setup/hugepages.sh@92 -- # local surp 00:03:38.476 15:26:39 -- setup/hugepages.sh@93 -- # local resv 00:03:38.476 15:26:39 -- setup/hugepages.sh@94 -- # local anon 00:03:38.476 15:26:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.476 15:26:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.476 15:26:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.476 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:38.476 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:38.476 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.476 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.476 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.476 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.476 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.476 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8761736 kB' 'MemAvailable: 10546760 kB' 'Buffers: 2436 kB' 'Cached: 1998008 kB' 'SwapCached: 0 kB' 'Active: 851180 kB' 'Inactive: 1271972 kB' 'Active(anon): 133172 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 124504 kB' 'Mapped: 49016 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138268 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73856 kB' 'KernelStack: 6164 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.476 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.476 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 
-- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.477 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:38.477 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:38.477 15:26:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.477 15:26:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.477 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.477 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:38.477 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:38.477 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.477 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.477 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.477 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.477 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.477 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.477 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8761736 kB' 'MemAvailable: 10546760 kB' 'Buffers: 2436 kB' 'Cached: 1998008 kB' 'SwapCached: 0 kB' 'Active: 850728 kB' 'Inactive: 1271972 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138268 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73856 kB' 'KernelStack: 6208 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.477 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.477 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 
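(Editor's note: what this verify pass reduces to is an accounting check: the HugePages_Total reported by the kernel must equal the requested page count plus any surplus and reserved pages, and the per-node figure must match the expectation echoed at the end of each test ("node0=1025 expecting 1025" for odd_alloc above; 512 for this custom_alloc run, i.e. 1 GiB worth of 2 MiB pages on a single node). A rough sketch of that check; the helper name and awk form are illustrative, not the script's implementation:

# Sketch of the hugepage accounting check being traced here.
verify_hugepages() {
    local expected=$1 total surp resv
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    (( total == expected + surp + resv )) || return 1
    echo "node0=$total expecting $expected"
}
# verify_hugepages 512   # the custom_alloc case being verified in this trace
)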
00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.478 15:26:39 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.478 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.478 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.479 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:38.479 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:38.479 15:26:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.479 15:26:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.479 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.479 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:38.479 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:38.479 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.479 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.479 15:26:39 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:38.479 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.479 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.479 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8761736 kB' 'MemAvailable: 10546760 kB' 'Buffers: 2436 kB' 'Cached: 1998008 kB' 'SwapCached: 0 kB' 'Active: 850696 kB' 'Inactive: 1271972 kB' 'Active(anon): 132688 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138268 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73856 kB' 'KernelStack: 6208 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.479 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.479 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 
-- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- 
setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.480 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:38.480 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:38.480 15:26:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.480 nr_hugepages=512 00:03:38.480 resv_hugepages=0 00:03:38.480 surplus_hugepages=0 00:03:38.480 anon_hugepages=0 00:03:38.480 15:26:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:38.480 15:26:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.480 15:26:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.480 15:26:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.480 15:26:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:38.480 15:26:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:38.480 15:26:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.480 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.480 15:26:39 -- setup/common.sh@18 -- # local node= 00:03:38.480 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:38.480 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.480 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.480 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.480 15:26:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.480 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.480 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.480 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8761736 kB' 'MemAvailable: 10546760 kB' 'Buffers: 2436 kB' 'Cached: 1998008 kB' 'SwapCached: 0 kB' 'Active: 850652 kB' 'Inactive: 1271972 kB' 'Active(anon): 132644 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271972 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 124008 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138268 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73856 kB' 'KernelStack: 6192 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:38.480 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.480 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 
-- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # 
continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.481 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.481 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 
00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.482 15:26:39 -- setup/common.sh@33 -- # echo 512 00:03:38.482 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:38.482 15:26:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:38.482 15:26:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.482 15:26:39 -- setup/hugepages.sh@27 -- # local node 00:03:38.482 15:26:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.482 15:26:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.482 15:26:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:38.482 15:26:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.482 15:26:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.482 15:26:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.482 15:26:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.482 15:26:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.482 15:26:39 -- setup/common.sh@18 -- # local node=0 00:03:38.482 15:26:39 -- setup/common.sh@19 -- # local var val 00:03:38.482 15:26:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.482 15:26:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.482 15:26:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.482 15:26:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.482 15:26:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.482 15:26:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8762088 kB' 'MemUsed: 3479876 kB' 'SwapCached: 0 kB' 'Active: 850696 kB' 'Inactive: 1271972 kB' 'Active(anon): 132688 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'FilePages: 2000444 kB' 'Mapped: 48872 kB' 'AnonPages: 124052 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138268 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 
-- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.482 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.482 15:26:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # continue 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.483 15:26:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.483 15:26:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.483 15:26:39 -- setup/common.sh@33 -- # echo 0 00:03:38.483 15:26:39 -- setup/common.sh@33 -- # return 0 00:03:38.483 node0=512 expecting 512 00:03:38.483 ************************************ 00:03:38.483 END TEST custom_alloc 00:03:38.483 ************************************ 00:03:38.483 15:26:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.483 15:26:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.483 15:26:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.483 15:26:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.483 15:26:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:38.483 15:26:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:38.483 00:03:38.483 real 0m0.543s 00:03:38.483 user 0m0.279s 00:03:38.483 sys 0m0.276s 00:03:38.483 15:26:39 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:03:38.483 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:03:38.483 15:26:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:38.483 15:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:38.483 15:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:38.483 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:03:38.742 ************************************ 00:03:38.742 START TEST no_shrink_alloc 00:03:38.742 ************************************ 00:03:38.742 15:26:39 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:38.742 15:26:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:38.742 15:26:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.742 15:26:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:38.742 15:26:39 -- setup/hugepages.sh@51 -- # shift 00:03:38.742 15:26:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:38.742 15:26:39 -- setup/hugepages.sh@52 -- # local node_ids 00:03:38.742 15:26:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.742 15:26:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.742 15:26:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:38.742 15:26:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:38.742 15:26:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.742 15:26:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.742 15:26:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:38.742 15:26:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.742 15:26:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.742 15:26:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:38.742 15:26:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:38.742 15:26:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:38.742 15:26:39 -- setup/hugepages.sh@73 -- # return 0 00:03:38.742 15:26:39 -- setup/hugepages.sh@198 -- # setup output 00:03:38.742 15:26:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.742 15:26:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:39.003 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:39.003 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:39.003 15:26:40 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:39.003 15:26:40 -- setup/hugepages.sh@89 -- # local node 00:03:39.003 15:26:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.003 15:26:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.003 15:26:40 -- setup/hugepages.sh@92 -- # local surp 00:03:39.003 15:26:40 -- setup/hugepages.sh@93 -- # local resv 00:03:39.003 15:26:40 -- setup/hugepages.sh@94 -- # local anon 00:03:39.003 15:26:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.003 15:26:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.003 15:26:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.003 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:39.003 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:39.003 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.003 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.003 15:26:40 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:39.003 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.003 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.003 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710188 kB' 'MemAvailable: 9495216 kB' 'Buffers: 2436 kB' 'Cached: 1998012 kB' 'SwapCached: 0 kB' 'Active: 850984 kB' 'Inactive: 1271976 kB' 'Active(anon): 132976 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'AnonPages: 124344 kB' 'Mapped: 49272 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138192 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6180 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- 
setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.003 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.003 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.004 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:39.004 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:39.004 15:26:40 -- setup/hugepages.sh@97 -- # anon=0 00:03:39.004 15:26:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.004 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.004 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:39.004 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:39.004 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.004 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.004 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.004 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.004 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.004 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710188 kB' 'MemAvailable: 9495216 kB' 'Buffers: 2436 kB' 'Cached: 1998012 kB' 'SwapCached: 0 kB' 'Active: 850732 kB' 'Inactive: 1271976 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 
'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'AnonPages: 124128 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138192 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6208 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.004 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.004 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # 
continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.005 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.005 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:39.005 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:39.005 15:26:40 -- setup/hugepages.sh@99 -- # surp=0 00:03:39.005 15:26:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.005 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.005 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:39.005 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:39.005 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.005 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.005 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.005 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.005 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.005 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.005 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710188 kB' 'MemAvailable: 9495216 kB' 'Buffers: 2436 kB' 'Cached: 1998012 kB' 'SwapCached: 0 kB' 'Active: 850756 kB' 'Inactive: 1271976 kB' 'Active(anon): 132748 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'AnonPages: 124112 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138192 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6192 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.006 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.006 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.007 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.007 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.007 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.007 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.007 15:26:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.266 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:39.266 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:39.266 nr_hugepages=1024 00:03:39.266 resv_hugepages=0 00:03:39.266 15:26:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:39.266 15:26:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.266 15:26:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.266 surplus_hugepages=0 00:03:39.266 15:26:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.266 anon_hugepages=0 00:03:39.266 15:26:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.266 15:26:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.266 15:26:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.266 15:26:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.266 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.266 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:39.266 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:39.266 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.266 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.266 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.266 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.266 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.266 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710188 kB' 'MemAvailable: 9495216 kB' 'Buffers: 2436 kB' 'Cached: 1998012 kB' 'SwapCached: 0 kB' 'Active: 850736 kB' 'Inactive: 1271976 kB' 'Active(anon): 132728 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'AnonPages: 124132 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138192 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6208 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 
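The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" followed by "continue" above and below are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a node-specific meminfo file when a node argument is given) into an array with mapfile and walks it field by field, skipping every line until the requested key matches, then echoes that key's value. A minimal sketch of the equivalent lookup, assuming the helper name and file paths seen in the trace rather than the exact SPDK source:

shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=$2               # requested field, optional NUMA node number
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix used in per-node files
    local var val _ IFS=': '
    while read -r var val _; do
        # Skip every field until the requested key matches, then print its value.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

get_meminfo HugePages_Total     # prints 1024 on this runner
get_meminfo HugePages_Surp 0    # same lookup against node0's meminfo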
00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.266 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.266 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 
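Once those fields are read back, verify_nr_hugepages in setup/hugepages.sh only has to check that the counts add up: the HugePages_Total reported by /proc/meminfo must equal the requested nr_hugepages plus surplus and reserved pages, and each NUMA node must hold the count the test expects ("node0=1024 expecting 1024" later in this trace). A hedged condensation of that bookkeeping, reusing the get_meminfo sketch above; the expected-count handling here is illustrative, not the exact script:

nr_hugepages=1024                        # requested earlier in the run
anon=$(get_meminfo AnonHugePages)        # 0 kB here, echoed as anon_hugepages in the log
surp=$(get_meminfo HugePages_Surp)       # 0
resv=$(get_meminfo HugePages_Rsvd)       # 0
total=$(get_meminfo HugePages_Total)     # 1024

# Global check: everything the kernel reports must be accounted for.
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total: $total"

# Per-node check: each node should report its expected share (a single node here).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_surp=$(get_meminfo HugePages_Surp "$node")
    echo "node$node=$(( nr_hugepages + node_surp )) expecting $nr_hugepages"
done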
00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.267 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.267 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.268 15:26:40 -- setup/common.sh@33 -- # echo 1024 00:03:39.268 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:39.268 15:26:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.268 15:26:40 -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.268 15:26:40 -- setup/hugepages.sh@27 -- # local node 00:03:39.268 15:26:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.268 15:26:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.268 15:26:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:39.268 15:26:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.268 15:26:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.268 15:26:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.268 15:26:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.268 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.268 15:26:40 -- setup/common.sh@18 -- # local node=0 00:03:39.268 15:26:40 -- 
setup/common.sh@19 -- # local var val 00:03:39.268 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.268 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.268 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.268 15:26:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.268 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.268 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7710188 kB' 'MemUsed: 4531776 kB' 'SwapCached: 0 kB' 'Active: 850756 kB' 'Inactive: 1271976 kB' 'Active(anon): 132748 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'FilePages: 2000448 kB' 'Mapped: 48880 kB' 'AnonPages: 124116 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138192 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.268 15:26:40 -- setup/common.sh@32 -- # continue 00:03:39.268 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.268 15:26:40 -- setup/common.sh@31 -- 
# read -r var val _
[xtrace repeats the IFS=': ' / read -r var val _ / continue cycle for every remaining meminfo key from SecPageTables through HugePages_Free; none matches HugePages_Surp]
00:03:39.269 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.269 15:26:40 -- setup/common.sh@33 -- # echo 0
00:03:39.269 15:26:40 -- setup/common.sh@33 -- # return 0
00:03:39.269 15:26:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:39.269 15:26:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.269 15:26:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.269 15:26:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.269 15:26:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:39.269 node0=1024 expecting 1024
00:03:39.269 15:26:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:39.269 15:26:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:39.269 15:26:40 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:39.269 15:26:40 -- setup/hugepages.sh@202 -- # setup output
00:03:39.269 15:26:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.269 15:26:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:39.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:39.528 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:39.528 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:39.528 INFO: Requested 512 hugepages but 1024 already allocated on node0
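The INFO line above is scripts/setup.sh deciding to keep the existing reservation: 512 pages were requested via NRHUGE, but 1024 are already allocated on node0 and CLEAR_HUGE=no. A rough, illustrative sketch of that kind of guard follows; the variable names, sysfs path, and the exact condition are assumptions for illustration, not the verbatim setup.sh code.

    # keep an existing hugepage reservation when it already covers the request (illustrative sketch)
    requested=${NRHUGE:-512}
    sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    allocated=$(cat "$sysfs")
    if [[ ${CLEAR_HUGE:-no} != yes ]] && (( allocated >= requested )); then
        echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
    else
        echo "$requested" > "$sysfs"
    fi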
00:03:39.528 15:26:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:39.528 15:26:40 -- setup/hugepages.sh@89 -- # local node
00:03:39.528 15:26:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.528 15:26:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.528 15:26:40 -- setup/hugepages.sh@92 -- # local surp
00:03:39.528 15:26:40 -- setup/hugepages.sh@93 -- # local resv
00:03:39.528 15:26:40 -- setup/hugepages.sh@94 -- # local anon
00:03:39.528 15:26:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.528 15:26:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.528 15:26:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.528 15:26:40 -- setup/common.sh@18 -- # local node=
00:03:39.528 15:26:40 -- setup/common.sh@19 -- # local var val
00:03:39.528 15:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.528 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.528 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.528 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.528 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.528 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.528 15:26:40 -- setup/common.sh@31 -- # IFS=': '
00:03:39.528 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7709324 kB' 'MemAvailable: 9494352 kB' 'Buffers: 2436 kB' 'Cached: 1998012 kB' 'SwapCached: 0 kB' 'Active: 851620 kB' 'Inactive: 1271976 kB' 'Active(anon): 133612 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1636 kB' 'Writeback: 0 kB' 'AnonPages: 124732 kB' 'Mapped: 48936 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 138188 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73776 kB' 'KernelStack: 6196 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 355560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:03:39.529 15:26:40 -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the read / string compare / continue cycle for every /proc/meminfo key from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:03:39.529 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.530 15:26:40 -- setup/common.sh@33 -- # echo 0
00:03:39.530 15:26:40 -- setup/common.sh@33 -- # return 0
00:03:39.530 15:26:40 -- setup/hugepages.sh@97 -- # anon=0
00:03:39.530 15:26:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[same get_meminfo preamble as above (setup/common.sh@17-@31), this time with get=HugePages_Surp and the global /proc/meminfo]
00:03:39.530 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' [same /proc/meminfo snapshot as above, now with 'Active: 850872 kB' 'Active(anon): 132864 kB' 'AnonPages: 124208 kB' 'Mapped: 48880 kB' 'KernelStack: 6192 kB' 'PageTables: 4348 kB' 'VmallocUsed: 54548 kB']
00:03:39.530 15:26:40 -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the read / string compare / continue cycle for every key from MemTotal through HugePages_Free; none matches HugePages_Surp]
00:03:39.531 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.531 15:26:40 -- setup/common.sh@33 -- # echo 0
00:03:39.531 15:26:40 -- setup/common.sh@33 -- # return 0
00:03:39.531 15:26:40 -- setup/hugepages.sh@99 -- # surp=0
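The lookups traced above (anon=0 from AnonHugePages, surp=0 from HugePages_Surp) all go through the same scan of /proc/meminfo or a per-node meminfo file. A simplified, illustrative sketch of that lookup follows; the name get_meminfo_value, the sed prefix strip, and the argument handling are assumptions for illustration, not the exact setup/common.sh implementation.

    # illustrative sketch: return the value of one meminfo key, optionally for a NUMA node
    get_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val
        # per-node lookups read the node's own meminfo file when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # node files prefix every line with "Node N "; strip it so the keys line up
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_value AnonHugePages     -> 0
    #      get_meminfo_value HugePages_Surp 0  -> 0 (node 0)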
00:03:39.531 15:26:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[same get_meminfo preamble as above (setup/common.sh@17-@31), this time with get=HugePages_Rsvd]
00:03:39.531 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' [same /proc/meminfo snapshot as above, now with 'Active: 850992 kB' 'Active(anon): 132984 kB' 'AnonPages: 124136 kB' 'Mapped: 48880 kB' 'KernelStack: 6192 kB' 'PageTables: 4348 kB' 'VmallocUsed: 54548 kB']
00:03:39.532 15:26:40 -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the read / string compare / continue cycle for every key from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:03:39.793 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.793 15:26:40 -- setup/common.sh@33 -- # echo 0
00:03:39.793 15:26:40 -- setup/common.sh@33 -- # return 0
00:03:39.793 15:26:40 -- setup/hugepages.sh@100 -- # resv=0
00:03:39.793 15:26:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:39.793 nr_hugepages=1024
00:03:39.793 15:26:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:39.793 resv_hugepages=0
00:03:39.793 15:26:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:39.793 surplus_hugepages=0
00:03:39.793 15:26:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:39.793 anon_hugepages=0
00:03:39.793 15:26:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.793 15:26:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
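The echoed summary and the two arithmetic tests above amount to a simple accounting identity: the hugepage total the kernel reports must be explained entirely by the requested count plus surplus and reserved pages, with no anonymous (transparent) hugepages in play. A sketch of that check, reusing the illustrative get_meminfo_value helper from above; the names and the exact expression are assumptions, not the verbatim setup/hugepages.sh code.

    expected=1024                                # pages the test configured
    anon=$(get_meminfo_value AnonHugePages)      # 0 kB in this run
    surp=$(get_meminfo_value HugePages_Surp)     # 0
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0
    total=$(get_meminfo_value HugePages_Total)   # 1024
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # the reported total has to equal the request once surplus and reserved pages are added in
    (( total == expected + surp + resv )) || { echo "unexpected hugepage accounting"; exit 1; }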
00:03:39.793 15:26:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[same get_meminfo preamble as above (setup/common.sh@17-@31), this time with get=HugePages_Total]
00:03:39.793 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' [same /proc/meminfo snapshot as above, now with 'Active: 850796 kB' 'Active(anon): 132788 kB' 'AnonPages: 124156 kB' 'Mapped: 48880 kB' 'KernelStack: 6176 kB' 'PageTables: 4300 kB' 'VmallocUsed: 54548 kB']
00:03:39.793 15:26:40 -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the read / string compare / continue cycle for every key from MemTotal through Unaccepted; none matches HugePages_Total]
00:03:39.794 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:39.794 15:26:41 -- setup/common.sh@33 -- # echo 1024
00:03:39.794 15:26:41 -- setup/common.sh@33 -- # return 0
00:03:39.794 15:26:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.794 15:26:41 -- setup/hugepages.sh@112 -- # get_nodes
00:03:39.794 15:26:41 -- setup/hugepages.sh@27 -- # local node
00:03:39.794 15:26:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.794 15:26:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:39.794 15:26:41 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:39.794 15:26:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:39.794 15:26:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:39.794 15:26:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:39.794 15:26:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:39.794 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.794 15:26:41 -- setup/common.sh@18 -- # local node=0
00:03:39.794 15:26:41 -- setup/common.sh@19 -- # local var val
00:03:39.794 15:26:41 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.794 15:26:41 --
setup/common.sh@20 -- # local mem_f mem 00:03:39.794 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.794 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.794 15:26:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.794 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.794 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.794 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7709324 kB' 'MemUsed: 4532640 kB' 'SwapCached: 0 kB' 'Active: 850916 kB' 'Inactive: 1271976 kB' 'Active(anon): 132908 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1271976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 500 kB' 'Writeback: 96 kB' 'FilePages: 2000448 kB' 'Mapped: 48880 kB' 'AnonPages: 124088 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 138188 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 73776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.794 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.794 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 
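The long run of "continue" entries above is setup/common.sh's get_meminfo helper: it reads /sys/devices/system/node/node0/meminfo when a node is given (otherwise /proc/meminfo), strips the "Node 0 " prefix from every line, then scans "field: value" pairs until the field matches the requested key and echoes the value. A condensed sketch of that logic, with the helper body simplified for illustration (the real version uses mapfile and extglob, as traced above):

get_meminfo() {                      # condensed sketch of test/setup/common.sh get_meminfo
    local key=$1 node=${2:-}
    local src=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        src=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node $node }         # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                  # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done <"$src"
    return 1
}

get_meminfo HugePages_Surp 0             # prints "0" for node 0, matching the dump above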
00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 
15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # continue 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.795 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.795 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.795 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:39.795 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:39.795 15:26:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.795 15:26:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.795 15:26:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.795 15:26:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.795 15:26:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.795 node0=1024 expecting 1024 00:03:39.795 15:26:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.795 00:03:39.795 real 0m1.060s 00:03:39.795 user 0m0.522s 00:03:39.795 sys 0m0.545s 00:03:39.795 15:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.795 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:39.795 ************************************ 00:03:39.795 END TEST no_shrink_alloc 00:03:39.795 ************************************ 00:03:39.795 15:26:41 -- setup/hugepages.sh@217 -- # clear_hp 00:03:39.795 15:26:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:39.795 15:26:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:39.795 15:26:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.795 15:26:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:39.795 15:26:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.795 15:26:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:39.795 15:26:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:39.795 15:26:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:39.795 00:03:39.795 real 0m5.003s 00:03:39.795 user 0m2.367s 00:03:39.795 sys 0m2.625s 00:03:39.795 15:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.795 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:39.795 ************************************ 00:03:39.795 END TEST hugepages 00:03:39.795 ************************************ 00:03:39.795 15:26:41 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:39.795 15:26:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.795 15:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.795 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:39.795 ************************************ 00:03:39.795 START TEST driver 00:03:39.796 
************************************ 00:03:39.796 15:26:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:40.054 * Looking for test storage... 00:03:40.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.054 15:26:41 -- setup/driver.sh@68 -- # setup reset 00:03:40.054 15:26:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.054 15:26:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.621 15:26:41 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.621 15:26:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.621 15:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.621 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:40.621 ************************************ 00:03:40.621 START TEST guess_driver 00:03:40.621 ************************************ 00:03:40.621 15:26:41 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:40.621 15:26:41 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.621 15:26:41 -- setup/driver.sh@47 -- # local fail=0 00:03:40.621 15:26:41 -- setup/driver.sh@49 -- # pick_driver 00:03:40.621 15:26:41 -- setup/driver.sh@36 -- # vfio 00:03:40.621 15:26:41 -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.621 15:26:41 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.621 15:26:41 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.621 15:26:41 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.621 15:26:41 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:40.622 15:26:41 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:40.622 15:26:41 -- setup/driver.sh@32 -- # return 1 00:03:40.622 15:26:41 -- setup/driver.sh@38 -- # uio 00:03:40.622 15:26:41 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:40.622 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:40.622 15:26:41 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:40.622 Looking for driver=uio_pci_generic 00:03:40.622 15:26:41 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:40.622 15:26:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.622 15:26:41 -- setup/driver.sh@45 -- # setup output config 00:03:40.622 15:26:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.622 15:26:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.189 15:26:42 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:41.189 15:26:42 -- setup/driver.sh@58 -- # continue 00:03:41.189 15:26:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.447 15:26:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.447 15:26:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:41.447 15:26:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:41.447 15:26:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.447 15:26:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:41.447 15:26:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.447 15:26:42 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:41.447 15:26:42 -- setup/driver.sh@65 -- # setup reset 00:03:41.447 15:26:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.447 15:26:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.013 00:03:42.013 real 0m1.460s 00:03:42.013 user 0m0.567s 00:03:42.013 sys 0m0.905s 00:03:42.013 15:26:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.013 ************************************ 00:03:42.013 END TEST guess_driver 00:03:42.013 ************************************ 00:03:42.013 15:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:42.013 00:03:42.013 real 0m2.211s 00:03:42.013 user 0m0.821s 00:03:42.013 sys 0m1.442s 00:03:42.013 15:26:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.013 15:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:42.013 ************************************ 00:03:42.013 END TEST driver 00:03:42.013 ************************************ 00:03:42.013 15:26:43 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:42.013 15:26:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.013 15:26:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.013 15:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:42.271 ************************************ 00:03:42.271 START TEST devices 00:03:42.271 ************************************ 00:03:42.271 15:26:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:42.271 * Looking for test storage... 
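The guess_driver test that just finished picks between vfio-pci and uio_pci_generic: it only accepts vfio when IOMMU groups are populated (or the unsafe no-IOMMU override is set), otherwise it falls back to uio_pci_generic if modprobe --show-depends confirms the module is available, which is what happened in this run. A condensed sketch of that decision; the function name and exact fallback wiring are simplified relative to test/setup/driver.sh:

pick_driver() {                          # sketch of the vfio-vs-uio choice traced above
    local unsafe='' n_groups
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( n_groups > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                    # IOMMU usable (or unsafe mode forced)
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic             # module and its uio dependency exist as .ko files
    else
        echo 'No valid driver found'
    fi
}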
00:03:42.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:42.271 15:26:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:42.271 15:26:43 -- setup/devices.sh@192 -- # setup reset 00:03:42.271 15:26:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.271 15:26:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.207 15:26:44 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:43.207 15:26:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:43.207 15:26:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:43.207 15:26:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:43.207 15:26:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:43.207 15:26:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:43.207 15:26:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:43.207 15:26:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:43.207 15:26:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:03:43.207 15:26:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:03:43.207 15:26:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:43.207 15:26:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:03:43.207 15:26:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:03:43.207 15:26:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:43.207 15:26:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:43.207 15:26:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:43.207 15:26:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:43.207 15:26:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:43.207 15:26:44 -- setup/devices.sh@196 -- # blocks=() 00:03:43.207 15:26:44 -- setup/devices.sh@196 -- # declare -a blocks 00:03:43.207 15:26:44 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:43.207 15:26:44 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:43.207 15:26:44 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:43.207 15:26:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:43.207 15:26:44 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:43.207 15:26:44 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:43.207 15:26:44 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:43.207 No valid GPT data, bailing 00:03:43.207 15:26:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.207 
15:26:44 -- scripts/common.sh@391 -- # pt= 00:03:43.207 15:26:44 -- scripts/common.sh@392 -- # return 1 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:43.207 15:26:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:43.207 15:26:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:43.207 15:26:44 -- setup/common.sh@80 -- # echo 4294967296 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:43.207 15:26:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.207 15:26:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:43.207 15:26:44 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:43.207 15:26:44 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:43.207 15:26:44 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:43.207 No valid GPT data, bailing 00:03:43.207 15:26:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:43.207 15:26:44 -- scripts/common.sh@391 -- # pt= 00:03:43.207 15:26:44 -- scripts/common.sh@392 -- # return 1 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:43.207 15:26:44 -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:43.207 15:26:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:43.207 15:26:44 -- setup/common.sh@80 -- # echo 4294967296 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:43.207 15:26:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.207 15:26:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:43.207 15:26:44 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:43.207 15:26:44 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:43.207 15:26:44 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:43.207 No valid GPT data, bailing 00:03:43.207 15:26:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:43.207 15:26:44 -- scripts/common.sh@391 -- # pt= 00:03:43.207 15:26:44 -- scripts/common.sh@392 -- # return 1 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:43.207 15:26:44 -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:43.207 15:26:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:43.207 15:26:44 -- setup/common.sh@80 -- # echo 4294967296 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:43.207 15:26:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.207 15:26:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:43.207 15:26:44 -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:43.207 15:26:44 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:43.207 15:26:44 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:43.207 15:26:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:43.207 15:26:44 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:43.207 15:26:44 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:43.207 15:26:44 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:43.466 No valid GPT data, bailing 00:03:43.466 15:26:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:43.466 15:26:44 -- scripts/common.sh@391 -- # pt= 00:03:43.466 15:26:44 -- scripts/common.sh@392 -- # return 1 00:03:43.466 15:26:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:43.466 15:26:44 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:43.466 15:26:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:43.466 15:26:44 -- setup/common.sh@80 -- # echo 5368709120 00:03:43.466 15:26:44 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:43.466 15:26:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.466 15:26:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:43.466 15:26:44 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:43.466 15:26:44 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:43.466 15:26:44 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:43.466 15:26:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.466 15:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.466 15:26:44 -- common/autotest_common.sh@10 -- # set +x 00:03:43.466 ************************************ 00:03:43.466 START TEST nvme_mount 00:03:43.466 ************************************ 00:03:43.466 15:26:44 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:43.466 15:26:44 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:43.466 15:26:44 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:43.466 15:26:44 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.466 15:26:44 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.466 15:26:44 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:43.466 15:26:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:43.466 15:26:44 -- setup/common.sh@40 -- # local part_no=1 00:03:43.466 15:26:44 -- setup/common.sh@41 -- # local size=1073741824 00:03:43.466 15:26:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:43.466 15:26:44 -- setup/common.sh@44 -- # parts=() 00:03:43.466 15:26:44 -- setup/common.sh@44 -- # local parts 00:03:43.466 15:26:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:43.466 15:26:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.466 15:26:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.466 15:26:44 -- setup/common.sh@46 -- # (( part++ )) 00:03:43.466 15:26:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.466 15:26:44 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:43.466 15:26:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:43.466 15:26:44 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:44.402 Creating new GPT entries in memory. 
00:03:44.402 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:44.402 other utilities. 00:03:44.402 15:26:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:44.402 15:26:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.402 15:26:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.402 15:26:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.402 15:26:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:45.779 Creating new GPT entries in memory. 00:03:45.779 The operation has completed successfully. 00:03:45.779 15:26:46 -- setup/common.sh@57 -- # (( part++ )) 00:03:45.779 15:26:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.779 15:26:46 -- setup/common.sh@62 -- # wait 56473 00:03:45.779 15:26:46 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.779 15:26:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:45.779 15:26:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.779 15:26:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:45.779 15:26:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:45.779 15:26:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.779 15:26:46 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.779 15:26:46 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:45.779 15:26:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:45.779 15:26:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.779 15:26:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.779 15:26:46 -- setup/devices.sh@53 -- # local found=0 00:03:45.779 15:26:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.779 15:26:46 -- setup/devices.sh@56 -- # : 00:03:45.779 15:26:46 -- setup/devices.sh@59 -- # local pci status 00:03:45.779 15:26:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.779 15:26:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:45.779 15:26:46 -- setup/devices.sh@47 -- # setup output config 00:03:45.779 15:26:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.779 15:26:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.779 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.779 15:26:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:45.779 15:26:47 -- setup/devices.sh@63 -- # found=1 00:03:45.779 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.779 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.779 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.038 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.038 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 
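The nvme_mount sequence unfolding here boils down to: wipe the GPT, create one small partition spanning sectors 2048-264191, wait for the partition uevent, format it ext4, mount it under test/setup/nvme_mount, and drop a dummy file that the verify step checks. A linear re-run of those commands as a sketch; the real test wraps sgdisk in flock, waits via scripts/sync_dev_uevents.sh rather than udevadm, and derives the sector range from its size variables:

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR, as logged above
sgdisk "$disk" --new=1:2048:264191                # one small partition, same range as the trace
udevadm settle                                    # stand-in for sync_dev_uevents.sh used by the test
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                            # dummy file the verify step looks for and removes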
00:03:46.038 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.038 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.038 15:26:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.038 15:26:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:46.038 15:26:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.038 15:26:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.038 15:26:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:46.038 15:26:47 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:46.038 15:26:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.038 15:26:47 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.038 15:26:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.038 15:26:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:46.038 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.038 15:26:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.038 15:26:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.296 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:46.296 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:46.296 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:46.296 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:46.296 15:26:47 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:46.296 15:26:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:46.296 15:26:47 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.555 15:26:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:46.555 15:26:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:46.555 15:26:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.555 15:26:47 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:46.555 15:26:47 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:46.555 15:26:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:46.555 15:26:47 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.555 15:26:47 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:46.555 15:26:47 -- setup/devices.sh@53 -- # local found=0 00:03:46.555 15:26:47 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.555 15:26:47 -- setup/devices.sh@56 -- # : 00:03:46.555 15:26:47 -- setup/devices.sh@59 -- # local pci status 00:03:46.555 15:26:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:46.555 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.555 15:26:47 -- setup/devices.sh@47 -- # setup output config 00:03:46.555 15:26:47 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:03:46.555 15:26:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:46.555 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.555 15:26:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:46.555 15:26:47 -- setup/devices.sh@63 -- # found=1 00:03:46.555 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.555 15:26:47 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.555 15:26:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.813 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.813 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.813 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.813 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.123 15:26:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.123 15:26:48 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:47.123 15:26:48 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:47.123 15:26:48 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.123 15:26:48 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:47.123 15:26:48 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:47.123 15:26:48 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:47.123 15:26:48 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:47.123 15:26:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:47.123 15:26:48 -- setup/devices.sh@50 -- # local mount_point= 00:03:47.123 15:26:48 -- setup/devices.sh@51 -- # local test_file= 00:03:47.123 15:26:48 -- setup/devices.sh@53 -- # local found=0 00:03:47.123 15:26:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.123 15:26:48 -- setup/devices.sh@59 -- # local pci status 00:03:47.123 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.123 15:26:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:47.123 15:26:48 -- setup/devices.sh@47 -- # setup output config 00:03:47.123 15:26:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.123 15:26:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.396 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.396 15:26:48 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:47.396 15:26:48 -- setup/devices.sh@63 -- # found=1 00:03:47.396 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.396 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.396 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.396 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.396 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.655 15:26:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.655 15:26:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.655 
15:26:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.655 15:26:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:47.655 15:26:48 -- setup/devices.sh@68 -- # return 0 00:03:47.655 15:26:48 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:47.655 15:26:48 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:47.655 15:26:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:47.655 15:26:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:47.655 15:26:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:47.655 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:47.655 00:03:47.655 real 0m4.136s 00:03:47.655 user 0m0.717s 00:03:47.655 sys 0m1.138s 00:03:47.655 15:26:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:47.655 ************************************ 00:03:47.655 END TEST nvme_mount 00:03:47.655 15:26:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.655 ************************************ 00:03:47.655 15:26:48 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:47.655 15:26:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.655 15:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.655 15:26:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.655 ************************************ 00:03:47.655 START TEST dm_mount 00:03:47.655 ************************************ 00:03:47.655 15:26:49 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:47.655 15:26:49 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:47.655 15:26:49 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:47.655 15:26:49 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:47.655 15:26:49 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:47.655 15:26:49 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:47.655 15:26:49 -- setup/common.sh@40 -- # local part_no=2 00:03:47.655 15:26:49 -- setup/common.sh@41 -- # local size=1073741824 00:03:47.655 15:26:49 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:47.655 15:26:49 -- setup/common.sh@44 -- # parts=() 00:03:47.655 15:26:49 -- setup/common.sh@44 -- # local parts 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.655 15:26:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part++ )) 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.655 15:26:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part++ )) 00:03:47.655 15:26:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.655 15:26:49 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:47.655 15:26:49 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:47.655 15:26:49 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:49.031 Creating new GPT entries in memory. 00:03:49.031 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:49.031 other utilities. 00:03:49.031 15:26:50 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:49.031 15:26:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.031 15:26:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:49.031 15:26:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.031 15:26:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:49.968 Creating new GPT entries in memory. 00:03:49.968 The operation has completed successfully. 00:03:49.968 15:26:51 -- setup/common.sh@57 -- # (( part++ )) 00:03:49.968 15:26:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.968 15:26:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:49.968 15:26:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.968 15:26:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:50.904 The operation has completed successfully. 00:03:50.904 15:26:52 -- setup/common.sh@57 -- # (( part++ )) 00:03:50.904 15:26:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.904 15:26:52 -- setup/common.sh@62 -- # wait 56937 00:03:50.904 15:26:52 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:50.904 15:26:52 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.904 15:26:52 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.904 15:26:52 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:50.904 15:26:52 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:50.904 15:26:52 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.904 15:26:52 -- setup/devices.sh@161 -- # break 00:03:50.904 15:26:52 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.904 15:26:52 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:50.904 15:26:52 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:50.904 15:26:52 -- setup/devices.sh@166 -- # dm=dm-0 00:03:50.904 15:26:52 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:50.904 15:26:52 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:50.904 15:26:52 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.904 15:26:52 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:50.904 15:26:52 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.904 15:26:52 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.904 15:26:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:50.904 15:26:52 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.904 15:26:52 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.904 15:26:52 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:50.904 15:26:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:50.904 15:26:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.904 15:26:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.905 15:26:52 -- setup/devices.sh@53 -- # local found=0 00:03:50.905 15:26:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:03:50.905 15:26:52 -- setup/devices.sh@56 -- # : 00:03:50.905 15:26:52 -- setup/devices.sh@59 -- # local pci status 00:03:50.905 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.905 15:26:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:50.905 15:26:52 -- setup/devices.sh@47 -- # setup output config 00:03:50.905 15:26:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.905 15:26:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.163 15:26:52 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.163 15:26:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:51.163 15:26:52 -- setup/devices.sh@63 -- # found=1 00:03:51.163 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.163 15:26:52 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.163 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.423 15:26:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.423 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.423 15:26:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.423 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.423 15:26:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.423 15:26:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:51.423 15:26:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:51.423 15:26:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:51.423 15:26:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:51.423 15:26:52 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:51.423 15:26:52 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:51.423 15:26:52 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:51.423 15:26:52 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:51.423 15:26:52 -- setup/devices.sh@50 -- # local mount_point= 00:03:51.423 15:26:52 -- setup/devices.sh@51 -- # local test_file= 00:03:51.423 15:26:52 -- setup/devices.sh@53 -- # local found=0 00:03:51.423 15:26:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:51.423 15:26:52 -- setup/devices.sh@59 -- # local pci status 00:03:51.423 15:26:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.423 15:26:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:51.423 15:26:52 -- setup/devices.sh@47 -- # setup output config 00:03:51.423 15:26:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.423 15:26:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.683 15:26:53 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.683 15:26:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.683 15:26:53 -- setup/devices.sh@63 -- # 
found=1 00:03:51.683 15:26:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.683 15:26:53 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.683 15:26:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.942 15:26:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.942 15:26:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.942 15:26:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.942 15:26:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.942 15:26:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.942 15:26:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:51.942 15:26:53 -- setup/devices.sh@68 -- # return 0 00:03:51.942 15:26:53 -- setup/devices.sh@187 -- # cleanup_dm 00:03:51.942 15:26:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:51.942 15:26:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:51.942 15:26:53 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:51.942 15:26:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.942 15:26:53 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:51.942 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.942 15:26:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:51.942 15:26:53 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:51.942 00:03:51.942 real 0m4.333s 00:03:51.942 user 0m0.466s 00:03:51.942 sys 0m0.765s 00:03:51.942 ************************************ 00:03:51.942 15:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.942 15:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.942 END TEST dm_mount 00:03:51.942 ************************************ 00:03:52.201 15:26:53 -- setup/devices.sh@1 -- # cleanup 00:03:52.201 15:26:53 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.201 15:26:53 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.201 15:26:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.201 15:26:53 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.201 15:26:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.201 15:26:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.460 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:52.460 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:52.460 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.460 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.460 15:26:53 -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.460 15:26:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:52.460 15:26:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.460 15:26:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.460 15:26:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.460 15:26:53 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.460 15:26:53 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.460 ************************************ 00:03:52.460 END TEST devices 00:03:52.460 ************************************ 00:03:52.460 00:03:52.460 real 0m10.189s 00:03:52.460 user 0m1.889s 00:03:52.460 sys 0m2.588s 
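Condensed, the dm_mount teardown that ran above is just this short shell sequence (mapper name, mount point and device names taken from this run; needs root):

    umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount   # drop the ext4 mount on the dm target
    dmsetup remove --force nvme_dm_test                       # tear down the device-mapper device
    wipefs --all /dev/nvme0n1p1                                # clear the filesystem signature on each partition
    wipefs --all /dev/nvme0n1p2
    wipefs --all /dev/nvme0n1                                  # then clear the GPT headers and PMBR on the whole namespace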
00:03:52.460 15:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.460 15:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:52.460 00:03:52.460 real 0m23.085s 00:03:52.460 user 0m7.476s 00:03:52.460 sys 0m9.797s 00:03:52.460 15:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.460 15:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:52.460 ************************************ 00:03:52.460 END TEST setup.sh 00:03:52.460 ************************************ 00:03:52.460 15:26:53 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:53.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.026 Hugepages 00:03:53.026 node hugesize free / total 00:03:53.026 node0 1048576kB 0 / 0 00:03:53.284 node0 2048kB 2048 / 2048 00:03:53.284 00:03:53.284 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.284 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:53.284 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:53.284 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:53.284 15:26:54 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.284 15:26:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.284 15:26:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.284 15:26:54 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.218 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.218 15:26:55 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:55.592 15:26:56 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:55.592 15:26:56 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:55.592 15:26:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:55.592 15:26:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:55.592 15:26:56 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:55.592 15:26:56 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:55.592 15:26:56 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.592 15:26:56 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:55.592 15:26:56 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:55.592 15:26:56 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:03:55.592 15:26:56 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:55.592 15:26:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.592 Waiting for block devices as requested 00:03:55.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:55.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:55.850 15:26:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:55.850 15:26:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:55.850 15:26:57 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:55.850 15:26:57 -- common/autotest_common.sh@1488 -- # grep 
0000:00:10.0/nvme/nvme 00:03:55.850 15:26:57 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:55.851 15:26:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:55.851 15:26:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:55.851 15:26:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1543 -- # continue 00:03:55.851 15:26:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:55.851 15:26:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:55.851 15:26:57 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:03:55.851 15:26:57 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:55.851 15:26:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:55.851 15:26:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:55.851 15:26:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:55.851 15:26:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:55.851 15:26:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:55.851 15:26:57 -- common/autotest_common.sh@1543 -- # continue 00:03:55.851 15:26:57 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:55.851 15:26:57 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:03:55.851 15:26:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.109 15:26:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:56.109 15:26:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:56.109 15:26:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.109 15:26:57 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.675 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.933 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.933 15:26:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:56.933 15:26:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.933 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.933 15:26:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:56.933 15:26:58 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:56.933 15:26:58 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:56.933 15:26:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:56.933 15:26:58 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:56.933 15:26:58 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:56.933 15:26:58 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:56.933 15:26:58 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:56.933 15:26:58 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.933 15:26:58 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:56.933 15:26:58 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:56.933 15:26:58 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:03:56.933 15:26:58 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:56.933 15:26:58 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:56.933 15:26:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:56.933 15:26:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:56.933 15:26:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:56.933 15:26:58 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:56.934 15:26:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:56.934 15:26:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:56.934 15:26:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:56.934 15:26:58 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:03:56.934 15:26:58 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:03:56.934 15:26:58 -- common/autotest_common.sh@1579 -- # return 0 00:03:56.934 15:26:58 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:56.934 15:26:58 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:56.934 15:26:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.934 15:26:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.934 15:26:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:56.934 15:26:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:56.934 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.934 15:26:58 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:56.934 15:26:58 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.934 15:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.934 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.205 ************************************ 00:03:57.205 START TEST env 00:03:57.205 ************************************ 00:03:57.205 15:26:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:57.205 * Looking for test storage... 00:03:57.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:57.206 15:26:58 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:57.206 15:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.206 15:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.206 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.206 ************************************ 00:03:57.206 START TEST env_memory 00:03:57.206 ************************************ 00:03:57.206 15:26:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:57.206 00:03:57.206 00:03:57.206 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.206 http://cunit.sourceforge.net/ 00:03:57.206 00:03:57.206 00:03:57.206 Suite: memory 00:03:57.206 Test: alloc and free memory map ...[2024-04-17 15:26:58.609005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:57.206 passed 00:03:57.465 Test: mem map translation ...[2024-04-17 15:26:58.644626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:57.465 [2024-04-17 15:26:58.644733] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:57.465 [2024-04-17 15:26:58.644829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:57.465 [2024-04-17 15:26:58.644844] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:57.465 passed 00:03:57.465 Test: mem map registration ...[2024-04-17 15:26:58.711614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:57.465 [2024-04-17 15:26:58.711688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:57.465 passed 00:03:57.465 Test: mem map adjacent registrations ...passed 00:03:57.465 00:03:57.465 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.465 suites 1 1 n/a 0 0 00:03:57.465 tests 4 4 4 0 0 00:03:57.465 asserts 152 152 152 0 n/a 00:03:57.465 00:03:57.465 Elapsed time = 0.223 seconds 00:03:57.465 00:03:57.465 real 0m0.235s 00:03:57.465 user 0m0.217s 00:03:57.465 sys 0m0.017s 00:03:57.465 15:26:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:57.465 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.465 ************************************ 00:03:57.465 END TEST env_memory 00:03:57.465 ************************************ 00:03:57.465 15:26:58 -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:57.465 15:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.465 15:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.465 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.724 ************************************ 00:03:57.724 START TEST env_vtophys 00:03:57.724 ************************************ 00:03:57.724 15:26:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:57.724 EAL: lib.eal log level changed from notice to debug 00:03:57.724 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 1 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 2 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 3 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 4 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 5 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 6 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 7 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 8 as core 0 on socket 0 00:03:57.724 EAL: Detected lcore 9 as core 0 on socket 0 00:03:57.724 EAL: Maximum logical cores by configuration: 128 00:03:57.724 EAL: Detected CPU lcores: 10 00:03:57.724 EAL: Detected NUMA nodes: 1 00:03:57.724 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:57.724 EAL: Detected shared linkage of DPDK 00:03:57.724 EAL: No shared files mode enabled, IPC will be disabled 00:03:57.724 EAL: Selected IOVA mode 'PA' 00:03:57.724 EAL: Probing VFIO support... 00:03:57.724 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:57.724 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:57.724 EAL: Ask a virtual area of 0x2e000 bytes 00:03:57.724 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:57.724 EAL: Setting up physically contiguous memory... 
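The "Probing VFIO support..." lines above show EAL giving up on VFIO because the kernel modules are not loaded; a rough manual check of the same condition (sysfs paths as printed in the log) is:

    if [ -e /sys/module/vfio ] && [ -e /sys/module/vfio_pci ]; then
        echo "VFIO is loaded; EAL could use vfio-pci"
    else
        echo "VFIO modules not loaded; this job binds NVMe to uio_pci_generic instead"
    fi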
00:03:57.724 EAL: Setting maximum number of open files to 524288 00:03:57.724 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:57.724 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:57.724 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.724 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:57.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.724 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.724 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:57.724 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:57.724 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.724 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:57.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.724 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.724 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:57.724 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:57.724 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.724 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:57.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.724 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.724 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:57.724 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:57.724 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.724 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:57.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.724 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.724 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:57.724 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:57.724 EAL: Hugepages will be freed exactly as allocated. 00:03:57.724 EAL: No shared files mode enabled, IPC is disabled 00:03:57.724 EAL: No shared files mode enabled, IPC is disabled 00:03:57.724 EAL: TSC frequency is ~2200000 KHz 00:03:57.724 EAL: Main lcore 0 is ready (tid=7fcd26623a00;cpuset=[0]) 00:03:57.724 EAL: Trying to obtain current memory policy. 00:03:57.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.724 EAL: Restoring previous memory policy: 0 00:03:57.724 EAL: request: mp_malloc_sync 00:03:57.724 EAL: No shared files mode enabled, IPC is disabled 00:03:57.724 EAL: Heap on socket 0 was expanded by 2MB 00:03:57.724 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:57.724 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:57.724 EAL: Mem event callback 'spdk:(nil)' registered 00:03:57.724 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:57.724 00:03:57.724 00:03:57.724 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.724 http://cunit.sourceforge.net/ 00:03:57.724 00:03:57.724 00:03:57.724 Suite: components_suite 00:03:57.724 Test: vtophys_malloc_test ...passed 00:03:57.724 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:57.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.724 EAL: Restoring previous memory policy: 4 00:03:57.724 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.724 EAL: request: mp_malloc_sync 00:03:57.724 EAL: No shared files mode enabled, IPC is disabled 00:03:57.724 EAL: Heap on socket 0 was expanded by 4MB 00:03:57.724 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.724 EAL: request: mp_malloc_sync 00:03:57.724 EAL: No shared files mode enabled, IPC is disabled 00:03:57.724 EAL: Heap on socket 0 was shrunk by 4MB 00:03:57.724 EAL: Trying to obtain current memory policy. 00:03:57.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.724 EAL: Restoring previous memory policy: 4 00:03:57.724 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.724 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was expanded by 6MB 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was shrunk by 6MB 00:03:57.725 EAL: Trying to obtain current memory policy. 00:03:57.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.725 EAL: Restoring previous memory policy: 4 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was expanded by 10MB 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was shrunk by 10MB 00:03:57.725 EAL: Trying to obtain current memory policy. 00:03:57.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.725 EAL: Restoring previous memory policy: 4 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was expanded by 18MB 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was shrunk by 18MB 00:03:57.725 EAL: Trying to obtain current memory policy. 00:03:57.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.725 EAL: Restoring previous memory policy: 4 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was expanded by 34MB 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was shrunk by 34MB 00:03:57.725 EAL: Trying to obtain current memory policy. 
00:03:57.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.725 EAL: Restoring previous memory policy: 4 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.725 EAL: request: mp_malloc_sync 00:03:57.725 EAL: No shared files mode enabled, IPC is disabled 00:03:57.725 EAL: Heap on socket 0 was expanded by 66MB 00:03:57.725 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.983 EAL: request: mp_malloc_sync 00:03:57.983 EAL: No shared files mode enabled, IPC is disabled 00:03:57.983 EAL: Heap on socket 0 was shrunk by 66MB 00:03:57.983 EAL: Trying to obtain current memory policy. 00:03:57.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.983 EAL: Restoring previous memory policy: 4 00:03:57.983 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.983 EAL: request: mp_malloc_sync 00:03:57.983 EAL: No shared files mode enabled, IPC is disabled 00:03:57.983 EAL: Heap on socket 0 was expanded by 130MB 00:03:57.983 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.983 EAL: request: mp_malloc_sync 00:03:57.983 EAL: No shared files mode enabled, IPC is disabled 00:03:57.983 EAL: Heap on socket 0 was shrunk by 130MB 00:03:57.983 EAL: Trying to obtain current memory policy. 00:03:57.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.983 EAL: Restoring previous memory policy: 4 00:03:57.983 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.983 EAL: request: mp_malloc_sync 00:03:57.983 EAL: No shared files mode enabled, IPC is disabled 00:03:57.983 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.242 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.242 EAL: request: mp_malloc_sync 00:03:58.242 EAL: No shared files mode enabled, IPC is disabled 00:03:58.242 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.242 EAL: Trying to obtain current memory policy. 00:03:58.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.500 EAL: Restoring previous memory policy: 4 00:03:58.500 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.500 EAL: request: mp_malloc_sync 00:03:58.500 EAL: No shared files mode enabled, IPC is disabled 00:03:58.500 EAL: Heap on socket 0 was expanded by 514MB 00:03:58.500 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.758 EAL: request: mp_malloc_sync 00:03:58.758 EAL: No shared files mode enabled, IPC is disabled 00:03:58.758 EAL: Heap on socket 0 was shrunk by 514MB 00:03:58.758 EAL: Trying to obtain current memory policy. 
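The expand/shrink pairs above reflect the malloc test making successively larger allocations and freeing each one, which fires the registered SPDK mem event callback in both directions. To reproduce the run outside the harness (binary path as used in this job; hugepages must already be reserved, and root is typically required):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/env/vtophys/vtophys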
00:03:58.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.016 EAL: Restoring previous memory policy: 4 00:03:59.016 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.016 EAL: request: mp_malloc_sync 00:03:59.016 EAL: No shared files mode enabled, IPC is disabled 00:03:59.016 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.275 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.533 EAL: request: mp_malloc_sync 00:03:59.533 EAL: No shared files mode enabled, IPC is disabled 00:03:59.533 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.533 passed 00:03:59.533 00:03:59.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.533 suites 1 1 n/a 0 0 00:03:59.533 tests 2 2 2 0 0 00:03:59.533 asserts 5267 5267 5267 0 n/a 00:03:59.533 00:03:59.533 Elapsed time = 1.774 seconds 00:03:59.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.533 EAL: request: mp_malloc_sync 00:03:59.533 EAL: No shared files mode enabled, IPC is disabled 00:03:59.533 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.533 EAL: No shared files mode enabled, IPC is disabled 00:03:59.533 EAL: No shared files mode enabled, IPC is disabled 00:03:59.533 EAL: No shared files mode enabled, IPC is disabled 00:03:59.533 00:03:59.533 real 0m1.983s 00:03:59.533 user 0m1.143s 00:03:59.533 sys 0m0.700s 00:03:59.533 15:27:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.533 15:27:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.533 ************************************ 00:03:59.533 END TEST env_vtophys 00:03:59.533 ************************************ 00:03:59.533 15:27:00 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.533 15:27:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.533 15:27:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.533 15:27:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.791 ************************************ 00:03:59.791 START TEST env_pci 00:03:59.791 ************************************ 00:03:59.791 15:27:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.791 00:03:59.791 00:03:59.791 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.791 http://cunit.sourceforge.net/ 00:03:59.791 00:03:59.791 00:03:59.791 Suite: pci 00:03:59.791 Test: pci_hook ...[2024-04-17 15:27:01.040947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58212 has claimed it 00:03:59.791 passed 00:03:59.791 00:03:59.791 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.791 suites 1 1 n/a 0 0 00:03:59.791 tests 1 1 1 0 0 00:03:59.791 asserts 25 25 25 0 n/a 00:03:59.791 00:03:59.791 Elapsed time = 0.003 seconds 00:03:59.791 EAL: Cannot find device (10000:00:01.0) 00:03:59.791 EAL: Failed to attach device on primary process 00:03:59.791 00:03:59.791 real 0m0.022s 00:03:59.791 user 0m0.009s 00:03:59.791 sys 0m0.012s 00:03:59.791 15:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.791 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:03:59.791 ************************************ 00:03:59.791 END TEST env_pci 00:03:59.791 ************************************ 00:03:59.791 15:27:01 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.791 15:27:01 -- env/env.sh@15 -- # uname 00:03:59.791 15:27:01 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:59.791 15:27:01 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:03:59.791 15:27:01 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.791 15:27:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:59.791 15:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.791 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:03:59.791 ************************************ 00:03:59.791 START TEST env_dpdk_post_init 00:03:59.791 ************************************ 00:03:59.791 15:27:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.791 EAL: Detected CPU lcores: 10 00:03:59.791 EAL: Detected NUMA nodes: 1 00:03:59.791 EAL: Detected shared linkage of DPDK 00:03:59.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.791 EAL: Selected IOVA mode 'PA' 00:04:00.049 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:00.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:00.049 Starting DPDK initialization... 00:04:00.049 Starting SPDK post initialization... 00:04:00.049 SPDK NVMe probe 00:04:00.049 Attaching to 0000:00:10.0 00:04:00.049 Attaching to 0000:00:11.0 00:04:00.049 Attached to 0000:00:10.0 00:04:00.049 Attached to 0000:00:11.0 00:04:00.049 Cleaning up... 00:04:00.049 00:04:00.049 real 0m0.190s 00:04:00.049 user 0m0.047s 00:04:00.049 sys 0m0.042s 00:04:00.049 15:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.049 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.049 ************************************ 00:04:00.049 END TEST env_dpdk_post_init 00:04:00.049 ************************************ 00:04:00.049 15:27:01 -- env/env.sh@26 -- # uname 00:04:00.049 15:27:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.049 15:27:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.049 15:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.049 15:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.049 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.308 ************************************ 00:04:00.308 START TEST env_mem_callbacks 00:04:00.308 ************************************ 00:04:00.308 15:27:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.308 EAL: Detected CPU lcores: 10 00:04:00.308 EAL: Detected NUMA nodes: 1 00:04:00.308 EAL: Detected shared linkage of DPDK 00:04:00.308 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.308 EAL: Selected IOVA mode 'PA' 00:04:00.308 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.308 00:04:00.308 00:04:00.308 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.308 http://cunit.sourceforge.net/ 00:04:00.308 00:04:00.308 00:04:00.308 Suite: memory 00:04:00.308 Test: test ... 
00:04:00.308 register 0x200000200000 2097152 00:04:00.308 malloc 3145728 00:04:00.308 register 0x200000400000 4194304 00:04:00.308 buf 0x200000500000 len 3145728 PASSED 00:04:00.308 malloc 64 00:04:00.308 buf 0x2000004fff40 len 64 PASSED 00:04:00.308 malloc 4194304 00:04:00.308 register 0x200000800000 6291456 00:04:00.308 buf 0x200000a00000 len 4194304 PASSED 00:04:00.308 free 0x200000500000 3145728 00:04:00.308 free 0x2000004fff40 64 00:04:00.308 unregister 0x200000400000 4194304 PASSED 00:04:00.308 free 0x200000a00000 4194304 00:04:00.308 unregister 0x200000800000 6291456 PASSED 00:04:00.308 malloc 8388608 00:04:00.308 register 0x200000400000 10485760 00:04:00.308 buf 0x200000600000 len 8388608 PASSED 00:04:00.308 free 0x200000600000 8388608 00:04:00.308 unregister 0x200000400000 10485760 PASSED 00:04:00.308 passed 00:04:00.308 00:04:00.308 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.308 suites 1 1 n/a 0 0 00:04:00.308 tests 1 1 1 0 0 00:04:00.308 asserts 15 15 15 0 n/a 00:04:00.308 00:04:00.308 Elapsed time = 0.010 seconds 00:04:00.308 00:04:00.308 real 0m0.149s 00:04:00.308 user 0m0.021s 00:04:00.308 sys 0m0.025s 00:04:00.308 15:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.308 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.308 ************************************ 00:04:00.308 END TEST env_mem_callbacks 00:04:00.308 ************************************ 00:04:00.308 00:04:00.308 real 0m3.276s 00:04:00.308 user 0m1.676s 00:04:00.308 sys 0m1.168s 00:04:00.308 15:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.308 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.308 ************************************ 00:04:00.308 END TEST env 00:04:00.308 ************************************ 00:04:00.308 15:27:01 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.308 15:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.308 15:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.308 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.566 ************************************ 00:04:00.566 START TEST rpc 00:04:00.566 ************************************ 00:04:00.566 15:27:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.566 * Looking for test storage... 00:04:00.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.566 15:27:01 -- rpc/rpc.sh@65 -- # spdk_pid=58334 00:04:00.566 15:27:01 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:00.566 15:27:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.566 15:27:01 -- rpc/rpc.sh@67 -- # waitforlisten 58334 00:04:00.566 15:27:01 -- common/autotest_common.sh@817 -- # '[' -z 58334 ']' 00:04:00.566 15:27:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.566 15:27:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:00.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.566 15:27:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
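The rpc suite drives a live spdk_tgt; stripped of the harness, the startup behind the "Waiting for process..." message is roughly the following (binary path, -e bdev flag and socket path all taken from this run; waitforlisten is the autotest_common.sh helper, and the polling loop here is only a minimal stand-in for it):

    ./build/bin/spdk_tgt -e bdev &                        # -e bdev enables the bdev tracepoint group checked later
    spdk_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten: poll for the RPC socket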
00:04:00.566 15:27:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:00.566 15:27:01 -- common/autotest_common.sh@10 -- # set +x 00:04:00.566 [2024-04-17 15:27:01.941894] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:00.566 [2024-04-17 15:27:01.942008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58334 ] 00:04:00.824 [2024-04-17 15:27:02.075658] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.824 [2024-04-17 15:27:02.236672] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.824 [2024-04-17 15:27:02.236740] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58334' to capture a snapshot of events at runtime. 00:04:00.824 [2024-04-17 15:27:02.236768] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.824 [2024-04-17 15:27:02.236781] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.824 [2024-04-17 15:27:02.236790] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58334 for offline analysis/debug. 00:04:00.824 [2024-04-17 15:27:02.236829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.758 15:27:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:01.758 15:27:02 -- common/autotest_common.sh@850 -- # return 0 00:04:01.758 15:27:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.758 15:27:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.758 15:27:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.758 15:27:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.758 15:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.758 15:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.758 15:27:02 -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 ************************************ 00:04:01.758 START TEST rpc_integrity 00:04:01.758 ************************************ 00:04:01.758 15:27:03 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:01.758 15:27:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.758 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.758 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:01.758 15:27:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.758 15:27:03 -- rpc/rpc.sh@13 -- # jq length 00:04:01.758 15:27:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.758 15:27:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.758 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.758 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:01.758 15:27:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.758 15:27:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
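The rpc_integrity steps logged here can also be issued by hand with scripts/rpc.py against the same target (default /var/tmp/spdk.sock socket assumed); this is the sequence the test walks through:

    scripts/rpc.py bdev_malloc_create 8 512                # 8 MB malloc bdev with 512-byte blocks -> Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length              # one bdev reported after the create
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0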
00:04:01.758 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.758 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:01.758 15:27:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.758 { 00:04:01.758 "name": "Malloc0", 00:04:01.758 "aliases": [ 00:04:01.758 "06e32776-8d66-4e59-9322-904eec27bdd7" 00:04:01.758 ], 00:04:01.758 "product_name": "Malloc disk", 00:04:01.758 "block_size": 512, 00:04:01.758 "num_blocks": 16384, 00:04:01.758 "uuid": "06e32776-8d66-4e59-9322-904eec27bdd7", 00:04:01.758 "assigned_rate_limits": { 00:04:01.758 "rw_ios_per_sec": 0, 00:04:01.758 "rw_mbytes_per_sec": 0, 00:04:01.758 "r_mbytes_per_sec": 0, 00:04:01.758 "w_mbytes_per_sec": 0 00:04:01.758 }, 00:04:01.758 "claimed": false, 00:04:01.758 "zoned": false, 00:04:01.758 "supported_io_types": { 00:04:01.758 "read": true, 00:04:01.758 "write": true, 00:04:01.758 "unmap": true, 00:04:01.758 "write_zeroes": true, 00:04:01.758 "flush": true, 00:04:01.758 "reset": true, 00:04:01.758 "compare": false, 00:04:01.758 "compare_and_write": false, 00:04:01.758 "abort": true, 00:04:01.758 "nvme_admin": false, 00:04:01.758 "nvme_io": false 00:04:01.758 }, 00:04:01.758 "memory_domains": [ 00:04:01.758 { 00:04:01.758 "dma_device_id": "system", 00:04:01.758 "dma_device_type": 1 00:04:01.758 }, 00:04:01.758 { 00:04:01.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.758 "dma_device_type": 2 00:04:01.758 } 00:04:01.758 ], 00:04:01.758 "driver_specific": {} 00:04:01.758 } 00:04:01.758 ]' 00:04:01.758 15:27:03 -- rpc/rpc.sh@17 -- # jq length 00:04:01.758 15:27:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.758 15:27:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.758 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.758 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 [2024-04-17 15:27:03.183258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.758 [2024-04-17 15:27:03.183313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.758 [2024-04-17 15:27:03.183332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x801b10 00:04:01.758 [2024-04-17 15:27:03.183342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.758 [2024-04-17 15:27:03.184947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.758 [2024-04-17 15:27:03.184981] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.758 Passthru0 00:04:01.758 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:01.758 15:27:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.758 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.758 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.016 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.017 15:27:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.017 { 00:04:02.017 "name": "Malloc0", 00:04:02.017 "aliases": [ 00:04:02.017 "06e32776-8d66-4e59-9322-904eec27bdd7" 00:04:02.017 ], 00:04:02.017 "product_name": "Malloc disk", 00:04:02.017 "block_size": 512, 00:04:02.017 "num_blocks": 16384, 00:04:02.017 "uuid": "06e32776-8d66-4e59-9322-904eec27bdd7", 00:04:02.017 "assigned_rate_limits": { 00:04:02.017 "rw_ios_per_sec": 0, 00:04:02.017 "rw_mbytes_per_sec": 0, 00:04:02.017 "r_mbytes_per_sec": 0, 00:04:02.017 
"w_mbytes_per_sec": 0 00:04:02.017 }, 00:04:02.017 "claimed": true, 00:04:02.017 "claim_type": "exclusive_write", 00:04:02.017 "zoned": false, 00:04:02.017 "supported_io_types": { 00:04:02.017 "read": true, 00:04:02.017 "write": true, 00:04:02.017 "unmap": true, 00:04:02.017 "write_zeroes": true, 00:04:02.017 "flush": true, 00:04:02.017 "reset": true, 00:04:02.017 "compare": false, 00:04:02.017 "compare_and_write": false, 00:04:02.017 "abort": true, 00:04:02.017 "nvme_admin": false, 00:04:02.017 "nvme_io": false 00:04:02.017 }, 00:04:02.017 "memory_domains": [ 00:04:02.017 { 00:04:02.017 "dma_device_id": "system", 00:04:02.017 "dma_device_type": 1 00:04:02.017 }, 00:04:02.017 { 00:04:02.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.017 "dma_device_type": 2 00:04:02.017 } 00:04:02.017 ], 00:04:02.017 "driver_specific": {} 00:04:02.017 }, 00:04:02.017 { 00:04:02.017 "name": "Passthru0", 00:04:02.017 "aliases": [ 00:04:02.017 "25f13790-e001-5396-9576-04e6dcbb527c" 00:04:02.017 ], 00:04:02.017 "product_name": "passthru", 00:04:02.017 "block_size": 512, 00:04:02.017 "num_blocks": 16384, 00:04:02.017 "uuid": "25f13790-e001-5396-9576-04e6dcbb527c", 00:04:02.017 "assigned_rate_limits": { 00:04:02.017 "rw_ios_per_sec": 0, 00:04:02.017 "rw_mbytes_per_sec": 0, 00:04:02.017 "r_mbytes_per_sec": 0, 00:04:02.017 "w_mbytes_per_sec": 0 00:04:02.017 }, 00:04:02.017 "claimed": false, 00:04:02.017 "zoned": false, 00:04:02.017 "supported_io_types": { 00:04:02.017 "read": true, 00:04:02.017 "write": true, 00:04:02.017 "unmap": true, 00:04:02.017 "write_zeroes": true, 00:04:02.017 "flush": true, 00:04:02.017 "reset": true, 00:04:02.017 "compare": false, 00:04:02.017 "compare_and_write": false, 00:04:02.017 "abort": true, 00:04:02.017 "nvme_admin": false, 00:04:02.017 "nvme_io": false 00:04:02.017 }, 00:04:02.017 "memory_domains": [ 00:04:02.017 { 00:04:02.017 "dma_device_id": "system", 00:04:02.017 "dma_device_type": 1 00:04:02.017 }, 00:04:02.017 { 00:04:02.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.017 "dma_device_type": 2 00:04:02.017 } 00:04:02.017 ], 00:04:02.017 "driver_specific": { 00:04:02.017 "passthru": { 00:04:02.017 "name": "Passthru0", 00:04:02.017 "base_bdev_name": "Malloc0" 00:04:02.017 } 00:04:02.017 } 00:04:02.017 } 00:04:02.017 ]' 00:04:02.017 15:27:03 -- rpc/rpc.sh@21 -- # jq length 00:04:02.017 15:27:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.017 15:27:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.017 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.017 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.017 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.017 15:27:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:02.017 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.017 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.017 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.017 15:27:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.017 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.017 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.017 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.017 15:27:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.017 15:27:03 -- rpc/rpc.sh@26 -- # jq length 00:04:02.017 ************************************ 00:04:02.017 END TEST rpc_integrity 00:04:02.017 ************************************ 00:04:02.017 15:27:03 -- rpc/rpc.sh@26 
-- # '[' 0 == 0 ']' 00:04:02.017 00:04:02.017 real 0m0.341s 00:04:02.017 user 0m0.216s 00:04:02.017 sys 0m0.050s 00:04:02.017 15:27:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.017 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.017 15:27:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:02.017 15:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.017 15:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.017 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.278 ************************************ 00:04:02.278 START TEST rpc_plugins 00:04:02.278 ************************************ 00:04:02.278 15:27:03 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:02.278 15:27:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:02.278 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.278 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.278 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.278 15:27:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:02.278 15:27:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:02.278 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.278 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.278 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.278 15:27:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:02.278 { 00:04:02.278 "name": "Malloc1", 00:04:02.278 "aliases": [ 00:04:02.278 "4fb54b98-9277-417e-831f-f1547e27ccf0" 00:04:02.278 ], 00:04:02.278 "product_name": "Malloc disk", 00:04:02.278 "block_size": 4096, 00:04:02.278 "num_blocks": 256, 00:04:02.278 "uuid": "4fb54b98-9277-417e-831f-f1547e27ccf0", 00:04:02.278 "assigned_rate_limits": { 00:04:02.278 "rw_ios_per_sec": 0, 00:04:02.278 "rw_mbytes_per_sec": 0, 00:04:02.278 "r_mbytes_per_sec": 0, 00:04:02.278 "w_mbytes_per_sec": 0 00:04:02.278 }, 00:04:02.278 "claimed": false, 00:04:02.278 "zoned": false, 00:04:02.278 "supported_io_types": { 00:04:02.278 "read": true, 00:04:02.278 "write": true, 00:04:02.278 "unmap": true, 00:04:02.278 "write_zeroes": true, 00:04:02.278 "flush": true, 00:04:02.278 "reset": true, 00:04:02.278 "compare": false, 00:04:02.278 "compare_and_write": false, 00:04:02.278 "abort": true, 00:04:02.278 "nvme_admin": false, 00:04:02.278 "nvme_io": false 00:04:02.278 }, 00:04:02.278 "memory_domains": [ 00:04:02.278 { 00:04:02.278 "dma_device_id": "system", 00:04:02.278 "dma_device_type": 1 00:04:02.278 }, 00:04:02.278 { 00:04:02.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.278 "dma_device_type": 2 00:04:02.278 } 00:04:02.278 ], 00:04:02.278 "driver_specific": {} 00:04:02.278 } 00:04:02.278 ]' 00:04:02.278 15:27:03 -- rpc/rpc.sh@32 -- # jq length 00:04:02.278 15:27:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:02.279 15:27:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:02.279 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.279 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.279 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.279 15:27:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:02.279 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.279 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.279 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.279 15:27:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:02.279 15:27:03 -- rpc/rpc.sh@36 -- # jq 
length 00:04:02.279 ************************************ 00:04:02.279 END TEST rpc_plugins 00:04:02.279 ************************************ 00:04:02.279 15:27:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:02.279 00:04:02.279 real 0m0.162s 00:04:02.279 user 0m0.110s 00:04:02.279 sys 0m0.016s 00:04:02.279 15:27:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.279 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.279 15:27:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:02.279 15:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.279 15:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.279 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.552 ************************************ 00:04:02.552 START TEST rpc_trace_cmd_test 00:04:02.552 ************************************ 00:04:02.552 15:27:03 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:02.552 15:27:03 -- rpc/rpc.sh@40 -- # local info 00:04:02.552 15:27:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:02.552 15:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.552 15:27:03 -- common/autotest_common.sh@10 -- # set +x 00:04:02.552 15:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.552 15:27:03 -- rpc/rpc.sh@42 -- # info='{ 00:04:02.552 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58334", 00:04:02.552 "tpoint_group_mask": "0x8", 00:04:02.552 "iscsi_conn": { 00:04:02.552 "mask": "0x2", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "scsi": { 00:04:02.552 "mask": "0x4", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "bdev": { 00:04:02.552 "mask": "0x8", 00:04:02.552 "tpoint_mask": "0xffffffffffffffff" 00:04:02.552 }, 00:04:02.552 "nvmf_rdma": { 00:04:02.552 "mask": "0x10", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "nvmf_tcp": { 00:04:02.552 "mask": "0x20", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "ftl": { 00:04:02.552 "mask": "0x40", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "blobfs": { 00:04:02.552 "mask": "0x80", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "dsa": { 00:04:02.552 "mask": "0x200", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "thread": { 00:04:02.552 "mask": "0x400", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "nvme_pcie": { 00:04:02.552 "mask": "0x800", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "iaa": { 00:04:02.552 "mask": "0x1000", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "nvme_tcp": { 00:04:02.552 "mask": "0x2000", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "bdev_nvme": { 00:04:02.552 "mask": "0x4000", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 }, 00:04:02.552 "sock": { 00:04:02.552 "mask": "0x8000", 00:04:02.552 "tpoint_mask": "0x0" 00:04:02.552 } 00:04:02.552 }' 00:04:02.552 15:27:03 -- rpc/rpc.sh@43 -- # jq length 00:04:02.552 15:27:03 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:02.552 15:27:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:02.552 15:27:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:02.552 15:27:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:02.552 15:27:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:02.552 15:27:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:02.552 15:27:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:02.552 15:27:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
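The trace checks above confirm that the bdev tracepoint group requested with -e bdev is fully enabled on the target; the same check by hand (rpc.py and jq, default socket assumed):

    scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # 0x8 is the bdev group
    scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # 0xffffffffffffffff when fully enabled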
00:04:02.810 15:27:04 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:02.810 00:04:02.810 real 0m0.285s 00:04:02.810 user 0m0.250s 00:04:02.810 sys 0m0.025s 00:04:02.810 15:27:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.810 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.810 ************************************ 00:04:02.810 END TEST rpc_trace_cmd_test 00:04:02.810 ************************************ 00:04:02.810 15:27:04 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.810 15:27:04 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.810 15:27:04 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.810 15:27:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.810 15:27:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.810 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.810 ************************************ 00:04:02.810 START TEST rpc_daemon_integrity 00:04:02.810 ************************************ 00:04:02.810 15:27:04 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:02.810 15:27:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.810 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.811 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.811 15:27:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.811 15:27:04 -- rpc/rpc.sh@13 -- # jq length 00:04:02.811 15:27:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.811 15:27:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.811 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.811 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.811 15:27:04 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:02.811 15:27:04 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.811 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:02.811 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:02.811 15:27:04 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.811 { 00:04:02.811 "name": "Malloc2", 00:04:02.811 "aliases": [ 00:04:02.811 "e063967d-5a01-4fcf-b93b-b843153d3808" 00:04:02.811 ], 00:04:02.811 "product_name": "Malloc disk", 00:04:02.811 "block_size": 512, 00:04:02.811 "num_blocks": 16384, 00:04:02.811 "uuid": "e063967d-5a01-4fcf-b93b-b843153d3808", 00:04:02.811 "assigned_rate_limits": { 00:04:02.811 "rw_ios_per_sec": 0, 00:04:02.811 "rw_mbytes_per_sec": 0, 00:04:02.811 "r_mbytes_per_sec": 0, 00:04:02.811 "w_mbytes_per_sec": 0 00:04:02.811 }, 00:04:02.811 "claimed": false, 00:04:02.811 "zoned": false, 00:04:02.811 "supported_io_types": { 00:04:02.811 "read": true, 00:04:02.811 "write": true, 00:04:02.811 "unmap": true, 00:04:02.811 "write_zeroes": true, 00:04:02.811 "flush": true, 00:04:02.811 "reset": true, 00:04:02.811 "compare": false, 00:04:02.811 "compare_and_write": false, 00:04:02.811 "abort": true, 00:04:02.811 "nvme_admin": false, 00:04:02.811 "nvme_io": false 00:04:02.811 }, 00:04:02.811 "memory_domains": [ 00:04:02.811 { 00:04:02.811 "dma_device_id": "system", 00:04:02.811 "dma_device_type": 1 00:04:02.811 }, 00:04:02.811 { 00:04:02.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.811 "dma_device_type": 2 00:04:02.811 } 00:04:02.811 ], 00:04:02.811 "driver_specific": {} 00:04:02.811 } 00:04:02.811 ]' 00:04:03.068 15:27:04 -- 
rpc/rpc.sh@17 -- # jq length 00:04:03.068 15:27:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.068 15:27:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.068 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.068 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.068 [2024-04-17 15:27:04.305447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.068 [2024-04-17 15:27:04.305504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.068 [2024-04-17 15:27:04.305524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8f6900 00:04:03.068 [2024-04-17 15:27:04.305533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.068 [2024-04-17 15:27:04.307022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.068 [2024-04-17 15:27:04.307057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.068 Passthru0 00:04:03.068 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.068 15:27:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.068 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.068 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.068 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.068 15:27:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.068 { 00:04:03.068 "name": "Malloc2", 00:04:03.068 "aliases": [ 00:04:03.068 "e063967d-5a01-4fcf-b93b-b843153d3808" 00:04:03.068 ], 00:04:03.068 "product_name": "Malloc disk", 00:04:03.068 "block_size": 512, 00:04:03.068 "num_blocks": 16384, 00:04:03.068 "uuid": "e063967d-5a01-4fcf-b93b-b843153d3808", 00:04:03.068 "assigned_rate_limits": { 00:04:03.068 "rw_ios_per_sec": 0, 00:04:03.068 "rw_mbytes_per_sec": 0, 00:04:03.068 "r_mbytes_per_sec": 0, 00:04:03.068 "w_mbytes_per_sec": 0 00:04:03.068 }, 00:04:03.068 "claimed": true, 00:04:03.068 "claim_type": "exclusive_write", 00:04:03.068 "zoned": false, 00:04:03.068 "supported_io_types": { 00:04:03.068 "read": true, 00:04:03.068 "write": true, 00:04:03.068 "unmap": true, 00:04:03.068 "write_zeroes": true, 00:04:03.068 "flush": true, 00:04:03.069 "reset": true, 00:04:03.069 "compare": false, 00:04:03.069 "compare_and_write": false, 00:04:03.069 "abort": true, 00:04:03.069 "nvme_admin": false, 00:04:03.069 "nvme_io": false 00:04:03.069 }, 00:04:03.069 "memory_domains": [ 00:04:03.069 { 00:04:03.069 "dma_device_id": "system", 00:04:03.069 "dma_device_type": 1 00:04:03.069 }, 00:04:03.069 { 00:04:03.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.069 "dma_device_type": 2 00:04:03.069 } 00:04:03.069 ], 00:04:03.069 "driver_specific": {} 00:04:03.069 }, 00:04:03.069 { 00:04:03.069 "name": "Passthru0", 00:04:03.069 "aliases": [ 00:04:03.069 "06198fa6-961a-516a-a326-ad76f69482b7" 00:04:03.069 ], 00:04:03.069 "product_name": "passthru", 00:04:03.069 "block_size": 512, 00:04:03.069 "num_blocks": 16384, 00:04:03.069 "uuid": "06198fa6-961a-516a-a326-ad76f69482b7", 00:04:03.069 "assigned_rate_limits": { 00:04:03.069 "rw_ios_per_sec": 0, 00:04:03.069 "rw_mbytes_per_sec": 0, 00:04:03.069 "r_mbytes_per_sec": 0, 00:04:03.069 "w_mbytes_per_sec": 0 00:04:03.069 }, 00:04:03.069 "claimed": false, 00:04:03.069 "zoned": false, 00:04:03.069 "supported_io_types": { 00:04:03.069 "read": true, 00:04:03.069 "write": true, 00:04:03.069 "unmap": true, 00:04:03.069 "write_zeroes": true, 00:04:03.069 "flush": 
true, 00:04:03.069 "reset": true, 00:04:03.069 "compare": false, 00:04:03.069 "compare_and_write": false, 00:04:03.069 "abort": true, 00:04:03.069 "nvme_admin": false, 00:04:03.069 "nvme_io": false 00:04:03.069 }, 00:04:03.069 "memory_domains": [ 00:04:03.069 { 00:04:03.069 "dma_device_id": "system", 00:04:03.069 "dma_device_type": 1 00:04:03.069 }, 00:04:03.069 { 00:04:03.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.069 "dma_device_type": 2 00:04:03.069 } 00:04:03.069 ], 00:04:03.069 "driver_specific": { 00:04:03.069 "passthru": { 00:04:03.069 "name": "Passthru0", 00:04:03.069 "base_bdev_name": "Malloc2" 00:04:03.069 } 00:04:03.069 } 00:04:03.069 } 00:04:03.069 ]' 00:04:03.069 15:27:04 -- rpc/rpc.sh@21 -- # jq length 00:04:03.069 15:27:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.069 15:27:04 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.069 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.069 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.069 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.069 15:27:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.069 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.069 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.069 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.069 15:27:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.069 15:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.069 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.069 15:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.069 15:27:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.069 15:27:04 -- rpc/rpc.sh@26 -- # jq length 00:04:03.069 ************************************ 00:04:03.069 END TEST rpc_daemon_integrity 00:04:03.069 ************************************ 00:04:03.069 15:27:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.069 00:04:03.069 real 0m0.314s 00:04:03.069 user 0m0.208s 00:04:03.069 sys 0m0.041s 00:04:03.069 15:27:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.069 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.327 15:27:04 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.327 15:27:04 -- rpc/rpc.sh@84 -- # killprocess 58334 00:04:03.327 15:27:04 -- common/autotest_common.sh@936 -- # '[' -z 58334 ']' 00:04:03.327 15:27:04 -- common/autotest_common.sh@940 -- # kill -0 58334 00:04:03.327 15:27:04 -- common/autotest_common.sh@941 -- # uname 00:04:03.327 15:27:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:03.327 15:27:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58334 00:04:03.327 killing process with pid 58334 00:04:03.327 15:27:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:03.327 15:27:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:03.327 15:27:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58334' 00:04:03.327 15:27:04 -- common/autotest_common.sh@955 -- # kill 58334 00:04:03.327 15:27:04 -- common/autotest_common.sh@960 -- # wait 58334 00:04:03.895 00:04:03.895 real 0m3.360s 00:04:03.895 user 0m4.249s 00:04:03.895 sys 0m0.876s 00:04:03.895 ************************************ 00:04:03.895 END TEST rpc 00:04:03.895 ************************************ 00:04:03.895 15:27:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.895 15:27:05 -- common/autotest_common.sh@10 -- # set +x 
00:04:03.895 15:27:05 -- spdk/autotest.sh@166 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:03.895 15:27:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.895 15:27:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.895 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:03.895 ************************************ 00:04:03.895 START TEST rpc_client 00:04:03.895 ************************************ 00:04:03.895 15:27:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:04.154 * Looking for test storage... 00:04:04.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:04.154 15:27:05 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:04.154 OK 00:04:04.154 15:27:05 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.154 00:04:04.154 real 0m0.106s 00:04:04.154 user 0m0.043s 00:04:04.154 sys 0m0.068s 00:04:04.154 ************************************ 00:04:04.154 END TEST rpc_client 00:04:04.154 ************************************ 00:04:04.154 15:27:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.154 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.154 15:27:05 -- spdk/autotest.sh@167 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:04.154 15:27:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.154 15:27:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.154 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.154 ************************************ 00:04:04.154 START TEST json_config 00:04:04.154 ************************************ 00:04:04.154 15:27:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:04.154 15:27:05 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.154 15:27:05 -- nvmf/common.sh@7 -- # uname -s 00:04:04.154 15:27:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.154 15:27:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.154 15:27:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.154 15:27:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.154 15:27:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.154 15:27:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.154 15:27:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.154 15:27:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.154 15:27:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.154 15:27:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.154 15:27:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:04:04.154 15:27:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:04:04.154 15:27:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.154 15:27:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.154 15:27:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.154 15:27:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.154 15:27:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.154 15:27:05 -- scripts/common.sh@502 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:04:04.154 15:27:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.154 15:27:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.154 15:27:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.154 15:27:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.154 15:27:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.154 15:27:05 -- paths/export.sh@5 -- # export PATH 00:04:04.154 15:27:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.154 15:27:05 -- nvmf/common.sh@47 -- # : 0 00:04:04.154 15:27:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.154 15:27:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.154 15:27:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.154 15:27:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.154 15:27:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.154 15:27:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.154 15:27:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.154 15:27:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.154 15:27:05 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:04.154 15:27:05 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.154 15:27:05 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.154 15:27:05 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.154 15:27:05 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.154 15:27:05 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.154 15:27:05 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.154 15:27:05 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.154 15:27:05 -- 
json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.154 INFO: JSON configuration test init 00:04:04.154 15:27:05 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.154 15:27:05 -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.154 15:27:05 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:04.154 15:27:05 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.154 15:27:05 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.155 15:27:05 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.155 15:27:05 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:04.155 15:27:05 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:04.155 15:27:05 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:04.155 15:27:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:04.155 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.155 15:27:05 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:04.155 15:27:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:04.155 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.155 Waiting for target to run... 00:04:04.155 15:27:05 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.155 15:27:05 -- json_config/common.sh@9 -- # local app=target 00:04:04.155 15:27:05 -- json_config/common.sh@10 -- # shift 00:04:04.155 15:27:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.155 15:27:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.155 15:27:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.155 15:27:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.155 15:27:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.155 15:27:05 -- json_config/common.sh@22 -- # app_pid["$app"]=58605 00:04:04.155 15:27:05 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.155 15:27:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.155 15:27:05 -- json_config/common.sh@25 -- # waitforlisten 58605 /var/tmp/spdk_tgt.sock 00:04:04.155 15:27:05 -- common/autotest_common.sh@817 -- # '[' -z 58605 ']' 00:04:04.155 15:27:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.155 15:27:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:04.155 15:27:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.155 15:27:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:04.155 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:04.414 [2024-04-17 15:27:05.651776] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:04.414 [2024-04-17 15:27:05.652415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58605 ] 00:04:04.982 [2024-04-17 15:27:06.185423] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.982 [2024-04-17 15:27:06.315334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.240 15:27:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:05.240 15:27:06 -- common/autotest_common.sh@850 -- # return 0 00:04:05.240 15:27:06 -- json_config/common.sh@26 -- # echo '' 00:04:05.240 00:04:05.240 15:27:06 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:05.240 15:27:06 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:05.240 15:27:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:05.240 15:27:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.240 15:27:06 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:05.240 15:27:06 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:05.240 15:27:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:05.240 15:27:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.498 15:27:06 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.498 15:27:06 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:05.498 15:27:06 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.066 15:27:07 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:06.066 15:27:07 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:06.066 15:27:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.066 15:27:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.066 15:27:07 -- json_config/json_config.sh@45 -- # local ret=0 00:04:06.066 15:27:07 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.066 15:27:07 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:06.066 15:27:07 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:06.066 15:27:07 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:06.066 15:27:07 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.325 15:27:07 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:06.325 15:27:07 -- json_config/json_config.sh@48 -- # local get_types 00:04:06.325 15:27:07 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:06.325 15:27:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:06.325 15:27:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.325 15:27:07 -- json_config/json_config.sh@55 -- # return 0 00:04:06.325 15:27:07 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
00:04:06.325 15:27:07 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:06.325 15:27:07 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:06.325 15:27:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.325 15:27:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.325 15:27:07 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.325 15:27:07 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:06.325 15:27:07 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.325 15:27:07 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.583 MallocForNvmf0 00:04:06.583 15:27:07 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.583 15:27:07 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.842 MallocForNvmf1 00:04:06.842 15:27:08 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.842 15:27:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.842 [2024-04-17 15:27:08.279067] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.101 15:27:08 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.101 15:27:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.360 15:27:08 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.360 15:27:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.629 15:27:08 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.629 15:27:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.902 15:27:09 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.902 15:27:09 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.902 [2024-04-17 15:27:09.279784] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.902 15:27:09 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:07.902 15:27:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:07.902 15:27:09 -- common/autotest_common.sh@10 -- # set +x 00:04:07.902 15:27:09 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:07.902 15:27:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:07.902 15:27:09 -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.160 15:27:09 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:08.160 15:27:09 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.160 15:27:09 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.419 MallocBdevForConfigChangeCheck 00:04:08.419 15:27:09 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:08.419 15:27:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:08.419 15:27:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.419 15:27:09 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:08.419 15:27:09 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.678 INFO: shutting down applications... 00:04:08.678 15:27:10 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:08.678 15:27:10 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:08.678 15:27:10 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:08.678 15:27:10 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:08.678 15:27:10 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:08.937 Calling clear_iscsi_subsystem 00:04:08.937 Calling clear_nvmf_subsystem 00:04:08.937 Calling clear_nbd_subsystem 00:04:08.937 Calling clear_ublk_subsystem 00:04:08.937 Calling clear_vhost_blk_subsystem 00:04:08.937 Calling clear_vhost_scsi_subsystem 00:04:08.937 Calling clear_bdev_subsystem 00:04:08.937 15:27:10 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:08.937 15:27:10 -- json_config/json_config.sh@343 -- # count=100 00:04:08.937 15:27:10 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:08.937 15:27:10 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.937 15:27:10 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:08.937 15:27:10 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:09.504 15:27:10 -- json_config/json_config.sh@345 -- # break 00:04:09.504 15:27:10 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:09.504 15:27:10 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:09.504 15:27:10 -- json_config/common.sh@31 -- # local app=target 00:04:09.504 15:27:10 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:09.504 15:27:10 -- json_config/common.sh@35 -- # [[ -n 58605 ]] 00:04:09.504 15:27:10 -- json_config/common.sh@38 -- # kill -SIGINT 58605 00:04:09.504 15:27:10 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:09.504 15:27:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.504 15:27:10 -- json_config/common.sh@41 -- # kill -0 58605 00:04:09.504 15:27:10 -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.072 15:27:11 -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.072 15:27:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.072 15:27:11 -- json_config/common.sh@41 -- # kill -0 58605 00:04:10.072 15:27:11 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:04:10.072 15:27:11 -- json_config/common.sh@43 -- # break 00:04:10.072 SPDK target shutdown done 00:04:10.072 15:27:11 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:10.072 15:27:11 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:10.072 INFO: relaunching applications... 00:04:10.072 15:27:11 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:10.072 15:27:11 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:10.072 15:27:11 -- json_config/common.sh@9 -- # local app=target 00:04:10.072 15:27:11 -- json_config/common.sh@10 -- # shift 00:04:10.072 15:27:11 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.072 15:27:11 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.072 15:27:11 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.072 15:27:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.072 15:27:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.072 Waiting for target to run... 00:04:10.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.072 15:27:11 -- json_config/common.sh@22 -- # app_pid["$app"]=58801 00:04:10.072 15:27:11 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.072 15:27:11 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:10.072 15:27:11 -- json_config/common.sh@25 -- # waitforlisten 58801 /var/tmp/spdk_tgt.sock 00:04:10.072 15:27:11 -- common/autotest_common.sh@817 -- # '[' -z 58801 ']' 00:04:10.072 15:27:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.072 15:27:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:10.072 15:27:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.072 15:27:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:10.072 15:27:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 [2024-04-17 15:27:11.341893] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:10.072 [2024-04-17 15:27:11.342219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:04:10.639 [2024-04-17 15:27:11.857788] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.639 [2024-04-17 15:27:11.981928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.898 [2024-04-17 15:27:12.308442] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.157 [2024-04-17 15:27:12.340551] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.157 15:27:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:11.157 00:04:11.157 INFO: Checking if target configuration is the same... 
00:04:11.157 15:27:12 -- common/autotest_common.sh@850 -- # return 0 00:04:11.157 15:27:12 -- json_config/common.sh@26 -- # echo '' 00:04:11.157 15:27:12 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:11.157 15:27:12 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:11.157 15:27:12 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.157 15:27:12 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:11.157 15:27:12 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.157 + '[' 2 -ne 2 ']' 00:04:11.157 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:11.157 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:11.157 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:11.157 +++ basename /dev/fd/62 00:04:11.157 ++ mktemp /tmp/62.XXX 00:04:11.157 + tmp_file_1=/tmp/62.HKd 00:04:11.157 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.157 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.157 + tmp_file_2=/tmp/spdk_tgt_config.json.K7S 00:04:11.157 + ret=0 00:04:11.157 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:11.416 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:11.416 + diff -u /tmp/62.HKd /tmp/spdk_tgt_config.json.K7S 00:04:11.416 INFO: JSON config files are the same 00:04:11.416 + echo 'INFO: JSON config files are the same' 00:04:11.416 + rm /tmp/62.HKd /tmp/spdk_tgt_config.json.K7S 00:04:11.416 + exit 0 00:04:11.416 15:27:12 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:11.416 INFO: changing configuration and checking if this can be detected... 00:04:11.416 15:27:12 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.416 15:27:12 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.416 15:27:12 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.984 15:27:13 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.984 15:27:13 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:11.984 15:27:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.984 + '[' 2 -ne 2 ']' 00:04:11.984 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:11.984 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:11.984 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:11.984 +++ basename /dev/fd/62 00:04:11.984 ++ mktemp /tmp/62.XXX 00:04:11.984 + tmp_file_1=/tmp/62.6sn 00:04:11.984 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.984 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.984 + tmp_file_2=/tmp/spdk_tgt_config.json.D8N 00:04:11.984 + ret=0 00:04:11.984 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:12.266 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:12.266 + diff -u /tmp/62.6sn /tmp/spdk_tgt_config.json.D8N 00:04:12.266 + ret=1 00:04:12.266 + echo '=== Start of file: /tmp/62.6sn ===' 00:04:12.266 + cat /tmp/62.6sn 00:04:12.266 + echo '=== End of file: /tmp/62.6sn ===' 00:04:12.266 + echo '' 00:04:12.266 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D8N ===' 00:04:12.266 + cat /tmp/spdk_tgt_config.json.D8N 00:04:12.266 + echo '=== End of file: /tmp/spdk_tgt_config.json.D8N ===' 00:04:12.266 + echo '' 00:04:12.266 + rm /tmp/62.6sn /tmp/spdk_tgt_config.json.D8N 00:04:12.266 + exit 1 00:04:12.266 INFO: configuration change detected. 00:04:12.266 15:27:13 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:12.266 15:27:13 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:12.266 15:27:13 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:12.266 15:27:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:12.266 15:27:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.266 15:27:13 -- json_config/json_config.sh@307 -- # local ret=0 00:04:12.266 15:27:13 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:12.266 15:27:13 -- json_config/json_config.sh@317 -- # [[ -n 58801 ]] 00:04:12.266 15:27:13 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:12.266 15:27:13 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.266 15:27:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:12.266 15:27:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.266 15:27:13 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:12.266 15:27:13 -- json_config/json_config.sh@193 -- # uname -s 00:04:12.266 15:27:13 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:12.266 15:27:13 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:12.266 15:27:13 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:12.266 15:27:13 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:12.266 15:27:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.266 15:27:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.266 15:27:13 -- json_config/json_config.sh@323 -- # killprocess 58801 00:04:12.266 15:27:13 -- common/autotest_common.sh@936 -- # '[' -z 58801 ']' 00:04:12.266 15:27:13 -- common/autotest_common.sh@940 -- # kill -0 58801 00:04:12.266 15:27:13 -- common/autotest_common.sh@941 -- # uname 00:04:12.266 15:27:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.266 15:27:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58801 00:04:12.266 killing process with pid 58801 00:04:12.266 15:27:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.266 15:27:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.266 15:27:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58801' 00:04:12.266 
15:27:13 -- common/autotest_common.sh@955 -- # kill 58801 00:04:12.266 15:27:13 -- common/autotest_common.sh@960 -- # wait 58801 00:04:12.840 15:27:14 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:12.840 15:27:14 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:12.840 15:27:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.840 15:27:14 -- common/autotest_common.sh@10 -- # set +x 00:04:12.840 15:27:14 -- json_config/json_config.sh@328 -- # return 0 00:04:12.840 INFO: Success 00:04:12.840 15:27:14 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:12.840 00:04:12.840 real 0m8.583s 00:04:12.840 user 0m11.906s 00:04:12.840 sys 0m2.055s 00:04:12.840 15:27:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.840 ************************************ 00:04:12.840 END TEST json_config 00:04:12.840 15:27:14 -- common/autotest_common.sh@10 -- # set +x 00:04:12.840 ************************************ 00:04:12.840 15:27:14 -- spdk/autotest.sh@168 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:12.840 15:27:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.840 15:27:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.840 15:27:14 -- common/autotest_common.sh@10 -- # set +x 00:04:12.840 ************************************ 00:04:12.840 START TEST json_config_extra_key 00:04:12.840 ************************************ 00:04:12.841 15:27:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.841 15:27:14 -- nvmf/common.sh@7 -- # uname -s 00:04:12.841 15:27:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.841 15:27:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.841 15:27:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.841 15:27:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.841 15:27:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.841 15:27:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.841 15:27:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.841 15:27:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.841 15:27:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.841 15:27:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.841 15:27:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:04:12.841 15:27:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:04:12.841 15:27:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.841 15:27:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.841 15:27:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.841 15:27:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.841 15:27:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.841 15:27:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.841 15:27:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.841 15:27:14 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.841 15:27:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.841 15:27:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.841 15:27:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.841 15:27:14 -- paths/export.sh@5 -- # export PATH 00:04:12.841 15:27:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.841 15:27:14 -- nvmf/common.sh@47 -- # : 0 00:04:12.841 15:27:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:12.841 15:27:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:12.841 15:27:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.841 15:27:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.841 15:27:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.841 15:27:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:12.841 15:27:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:12.841 15:27:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:12.841 15:27:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.098 INFO: launching applications... 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:13.098 15:27:14 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:13.098 15:27:14 -- json_config/common.sh@9 -- # local app=target 00:04:13.098 15:27:14 -- json_config/common.sh@10 -- # shift 00:04:13.098 15:27:14 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.098 15:27:14 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.098 15:27:14 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.098 15:27:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.098 15:27:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.098 15:27:14 -- json_config/common.sh@22 -- # app_pid["$app"]=58955 00:04:13.098 15:27:14 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.098 Waiting for target to run... 00:04:13.098 15:27:14 -- json_config/common.sh@25 -- # waitforlisten 58955 /var/tmp/spdk_tgt.sock 00:04:13.098 15:27:14 -- common/autotest_common.sh@817 -- # '[' -z 58955 ']' 00:04:13.098 15:27:14 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:13.098 15:27:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.098 15:27:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:13.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.098 15:27:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.098 15:27:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:13.098 15:27:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.098 [2024-04-17 15:27:14.350598] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:13.098 [2024-04-17 15:27:14.350773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:04:13.664 [2024-04-17 15:27:14.870916] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.664 [2024-04-17 15:27:14.985239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.923 15:27:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:13.923 15:27:15 -- common/autotest_common.sh@850 -- # return 0 00:04:13.923 00:04:13.923 15:27:15 -- json_config/common.sh@26 -- # echo '' 00:04:13.923 INFO: shutting down applications... 00:04:13.923 15:27:15 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:13.923 15:27:15 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:13.923 15:27:15 -- json_config/common.sh@31 -- # local app=target 00:04:13.923 15:27:15 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.923 15:27:15 -- json_config/common.sh@35 -- # [[ -n 58955 ]] 00:04:13.923 15:27:15 -- json_config/common.sh@38 -- # kill -SIGINT 58955 00:04:13.923 15:27:15 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.923 15:27:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.923 15:27:15 -- json_config/common.sh@41 -- # kill -0 58955 00:04:13.923 15:27:15 -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.491 15:27:15 -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.491 15:27:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.491 15:27:15 -- json_config/common.sh@41 -- # kill -0 58955 00:04:14.491 15:27:15 -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.059 15:27:16 -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.059 15:27:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.059 15:27:16 -- json_config/common.sh@41 -- # kill -0 58955 00:04:15.059 15:27:16 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.059 15:27:16 -- json_config/common.sh@43 -- # break 00:04:15.059 15:27:16 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.059 SPDK target shutdown done 00:04:15.059 15:27:16 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.059 Success 00:04:15.059 15:27:16 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.059 00:04:15.059 real 0m2.088s 00:04:15.059 user 0m1.542s 00:04:15.059 sys 0m0.548s 00:04:15.059 15:27:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.059 15:27:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.059 ************************************ 00:04:15.059 END TEST json_config_extra_key 00:04:15.059 ************************************ 00:04:15.059 15:27:16 -- spdk/autotest.sh@169 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.059 15:27:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.059 15:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.059 15:27:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.059 ************************************ 00:04:15.059 START TEST alias_rpc 00:04:15.059 ************************************ 00:04:15.059 15:27:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.059 * Looking for test storage... 00:04:15.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:15.318 15:27:16 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.318 15:27:16 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59031 00:04:15.318 15:27:16 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59031 00:04:15.318 15:27:16 -- common/autotest_common.sh@817 -- # '[' -z 59031 ']' 00:04:15.318 15:27:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.318 15:27:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.318 15:27:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:15.318 15:27:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.318 15:27:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.318 15:27:16 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.318 [2024-04-17 15:27:16.574063] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:15.318 [2024-04-17 15:27:16.574203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:04:15.318 [2024-04-17 15:27:16.710887] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.576 [2024-04-17 15:27:16.859804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.513 15:27:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:16.513 15:27:17 -- common/autotest_common.sh@850 -- # return 0 00:04:16.513 15:27:17 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:16.513 15:27:17 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59031 00:04:16.513 15:27:17 -- common/autotest_common.sh@936 -- # '[' -z 59031 ']' 00:04:16.513 15:27:17 -- common/autotest_common.sh@940 -- # kill -0 59031 00:04:16.513 15:27:17 -- common/autotest_common.sh@941 -- # uname 00:04:16.513 15:27:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:16.513 15:27:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59031 00:04:16.513 15:27:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:16.513 15:27:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:16.513 killing process with pid 59031 00:04:16.513 15:27:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59031' 00:04:16.513 15:27:17 -- common/autotest_common.sh@955 -- # kill 59031 00:04:16.513 15:27:17 -- common/autotest_common.sh@960 -- # wait 59031 00:04:17.115 00:04:17.115 real 0m2.058s 00:04:17.115 user 0m2.240s 00:04:17.115 sys 0m0.529s 00:04:17.115 15:27:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.115 ************************************ 00:04:17.115 END TEST alias_rpc 00:04:17.115 ************************************ 00:04:17.115 15:27:18 -- common/autotest_common.sh@10 -- # set +x 00:04:17.115 15:27:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 0 ]] 00:04:17.115 15:27:18 -- spdk/autotest.sh@172 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.115 15:27:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.115 15:27:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.115 15:27:18 -- common/autotest_common.sh@10 -- # set +x 00:04:17.374 ************************************ 00:04:17.374 START TEST spdkcli_tcp 00:04:17.374 ************************************ 00:04:17.374 15:27:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.374 * Looking for test storage... 
00:04:17.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:17.374 15:27:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:17.374 15:27:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.374 15:27:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:17.374 15:27:18 -- common/autotest_common.sh@10 -- # set +x 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59112 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.374 15:27:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 59112 00:04:17.374 15:27:18 -- common/autotest_common.sh@817 -- # '[' -z 59112 ']' 00:04:17.374 15:27:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.374 15:27:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:17.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.374 15:27:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.374 15:27:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:17.374 15:27:18 -- common/autotest_common.sh@10 -- # set +x 00:04:17.374 [2024-04-17 15:27:18.753827] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:17.374 [2024-04-17 15:27:18.753961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:04:17.632 [2024-04-17 15:27:18.894445] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.632 [2024-04-17 15:27:19.021307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.632 [2024-04-17 15:27:19.021317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.569 15:27:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:18.569 15:27:19 -- common/autotest_common.sh@850 -- # return 0 00:04:18.569 15:27:19 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.569 15:27:19 -- spdkcli/tcp.sh@31 -- # socat_pid=59129 00:04:18.569 15:27:19 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.569 [ 00:04:18.569 "bdev_malloc_delete", 00:04:18.569 "bdev_malloc_create", 00:04:18.569 "bdev_null_resize", 00:04:18.569 "bdev_null_delete", 00:04:18.569 "bdev_null_create", 00:04:18.569 "bdev_nvme_cuse_unregister", 00:04:18.569 "bdev_nvme_cuse_register", 00:04:18.569 "bdev_opal_new_user", 00:04:18.569 "bdev_opal_set_lock_state", 00:04:18.569 "bdev_opal_delete", 00:04:18.569 "bdev_opal_get_info", 00:04:18.569 "bdev_opal_create", 00:04:18.569 "bdev_nvme_opal_revert", 00:04:18.569 "bdev_nvme_opal_init", 00:04:18.569 "bdev_nvme_send_cmd", 00:04:18.569 "bdev_nvme_get_path_iostat", 00:04:18.569 "bdev_nvme_get_mdns_discovery_info", 00:04:18.569 "bdev_nvme_stop_mdns_discovery", 00:04:18.569 "bdev_nvme_start_mdns_discovery", 00:04:18.569 "bdev_nvme_set_multipath_policy", 00:04:18.569 "bdev_nvme_set_preferred_path", 00:04:18.569 "bdev_nvme_get_io_paths", 00:04:18.569 "bdev_nvme_remove_error_injection", 00:04:18.569 "bdev_nvme_add_error_injection", 00:04:18.569 "bdev_nvme_get_discovery_info", 00:04:18.569 "bdev_nvme_stop_discovery", 00:04:18.569 "bdev_nvme_start_discovery", 00:04:18.569 "bdev_nvme_get_controller_health_info", 00:04:18.569 "bdev_nvme_disable_controller", 00:04:18.569 "bdev_nvme_enable_controller", 00:04:18.569 "bdev_nvme_reset_controller", 00:04:18.569 "bdev_nvme_get_transport_statistics", 00:04:18.569 "bdev_nvme_apply_firmware", 00:04:18.569 "bdev_nvme_detach_controller", 00:04:18.569 "bdev_nvme_get_controllers", 00:04:18.569 "bdev_nvme_attach_controller", 00:04:18.569 "bdev_nvme_set_hotplug", 00:04:18.569 "bdev_nvme_set_options", 00:04:18.569 "bdev_passthru_delete", 00:04:18.569 "bdev_passthru_create", 00:04:18.569 "bdev_lvol_grow_lvstore", 00:04:18.569 "bdev_lvol_get_lvols", 00:04:18.569 "bdev_lvol_get_lvstores", 00:04:18.569 "bdev_lvol_delete", 00:04:18.569 "bdev_lvol_set_read_only", 00:04:18.569 "bdev_lvol_resize", 00:04:18.569 "bdev_lvol_decouple_parent", 00:04:18.569 "bdev_lvol_inflate", 00:04:18.569 "bdev_lvol_rename", 00:04:18.569 "bdev_lvol_clone_bdev", 00:04:18.569 "bdev_lvol_clone", 00:04:18.569 "bdev_lvol_snapshot", 00:04:18.569 "bdev_lvol_create", 00:04:18.569 "bdev_lvol_delete_lvstore", 00:04:18.569 "bdev_lvol_rename_lvstore", 00:04:18.569 "bdev_lvol_create_lvstore", 00:04:18.569 "bdev_raid_set_options", 00:04:18.569 "bdev_raid_remove_base_bdev", 00:04:18.569 "bdev_raid_add_base_bdev", 00:04:18.569 "bdev_raid_delete", 00:04:18.569 "bdev_raid_create", 00:04:18.569 "bdev_raid_get_bdevs", 00:04:18.569 "bdev_error_inject_error", 
00:04:18.569 "bdev_error_delete", 00:04:18.569 "bdev_error_create", 00:04:18.569 "bdev_split_delete", 00:04:18.569 "bdev_split_create", 00:04:18.569 "bdev_delay_delete", 00:04:18.569 "bdev_delay_create", 00:04:18.569 "bdev_delay_update_latency", 00:04:18.569 "bdev_zone_block_delete", 00:04:18.569 "bdev_zone_block_create", 00:04:18.569 "blobfs_create", 00:04:18.569 "blobfs_detect", 00:04:18.569 "blobfs_set_cache_size", 00:04:18.569 "bdev_aio_delete", 00:04:18.569 "bdev_aio_rescan", 00:04:18.569 "bdev_aio_create", 00:04:18.569 "bdev_ftl_set_property", 00:04:18.569 "bdev_ftl_get_properties", 00:04:18.569 "bdev_ftl_get_stats", 00:04:18.569 "bdev_ftl_unmap", 00:04:18.569 "bdev_ftl_unload", 00:04:18.569 "bdev_ftl_delete", 00:04:18.569 "bdev_ftl_load", 00:04:18.569 "bdev_ftl_create", 00:04:18.569 "bdev_virtio_attach_controller", 00:04:18.569 "bdev_virtio_scsi_get_devices", 00:04:18.569 "bdev_virtio_detach_controller", 00:04:18.569 "bdev_virtio_blk_set_hotplug", 00:04:18.569 "bdev_iscsi_delete", 00:04:18.569 "bdev_iscsi_create", 00:04:18.569 "bdev_iscsi_set_options", 00:04:18.569 "bdev_uring_delete", 00:04:18.569 "bdev_uring_rescan", 00:04:18.569 "bdev_uring_create", 00:04:18.569 "accel_error_inject_error", 00:04:18.569 "ioat_scan_accel_module", 00:04:18.569 "dsa_scan_accel_module", 00:04:18.569 "iaa_scan_accel_module", 00:04:18.569 "keyring_file_remove_key", 00:04:18.569 "keyring_file_add_key", 00:04:18.569 "iscsi_set_options", 00:04:18.569 "iscsi_get_auth_groups", 00:04:18.569 "iscsi_auth_group_remove_secret", 00:04:18.569 "iscsi_auth_group_add_secret", 00:04:18.569 "iscsi_delete_auth_group", 00:04:18.569 "iscsi_create_auth_group", 00:04:18.569 "iscsi_set_discovery_auth", 00:04:18.569 "iscsi_get_options", 00:04:18.569 "iscsi_target_node_request_logout", 00:04:18.569 "iscsi_target_node_set_redirect", 00:04:18.569 "iscsi_target_node_set_auth", 00:04:18.569 "iscsi_target_node_add_lun", 00:04:18.569 "iscsi_get_stats", 00:04:18.569 "iscsi_get_connections", 00:04:18.569 "iscsi_portal_group_set_auth", 00:04:18.569 "iscsi_start_portal_group", 00:04:18.569 "iscsi_delete_portal_group", 00:04:18.569 "iscsi_create_portal_group", 00:04:18.569 "iscsi_get_portal_groups", 00:04:18.569 "iscsi_delete_target_node", 00:04:18.569 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.569 "iscsi_target_node_add_pg_ig_maps", 00:04:18.569 "iscsi_create_target_node", 00:04:18.569 "iscsi_get_target_nodes", 00:04:18.569 "iscsi_delete_initiator_group", 00:04:18.569 "iscsi_initiator_group_remove_initiators", 00:04:18.569 "iscsi_initiator_group_add_initiators", 00:04:18.569 "iscsi_create_initiator_group", 00:04:18.569 "iscsi_get_initiator_groups", 00:04:18.569 "nvmf_set_crdt", 00:04:18.569 "nvmf_set_config", 00:04:18.569 "nvmf_set_max_subsystems", 00:04:18.569 "nvmf_subsystem_get_listeners", 00:04:18.569 "nvmf_subsystem_get_qpairs", 00:04:18.569 "nvmf_subsystem_get_controllers", 00:04:18.569 "nvmf_get_stats", 00:04:18.569 "nvmf_get_transports", 00:04:18.569 "nvmf_create_transport", 00:04:18.569 "nvmf_get_targets", 00:04:18.569 "nvmf_delete_target", 00:04:18.569 "nvmf_create_target", 00:04:18.569 "nvmf_subsystem_allow_any_host", 00:04:18.569 "nvmf_subsystem_remove_host", 00:04:18.569 "nvmf_subsystem_add_host", 00:04:18.569 "nvmf_ns_remove_host", 00:04:18.569 "nvmf_ns_add_host", 00:04:18.569 "nvmf_subsystem_remove_ns", 00:04:18.569 "nvmf_subsystem_add_ns", 00:04:18.569 "nvmf_subsystem_listener_set_ana_state", 00:04:18.569 "nvmf_discovery_get_referrals", 00:04:18.569 "nvmf_discovery_remove_referral", 00:04:18.569 
"nvmf_discovery_add_referral", 00:04:18.569 "nvmf_subsystem_remove_listener", 00:04:18.569 "nvmf_subsystem_add_listener", 00:04:18.569 "nvmf_delete_subsystem", 00:04:18.569 "nvmf_create_subsystem", 00:04:18.569 "nvmf_get_subsystems", 00:04:18.569 "env_dpdk_get_mem_stats", 00:04:18.569 "nbd_get_disks", 00:04:18.569 "nbd_stop_disk", 00:04:18.569 "nbd_start_disk", 00:04:18.569 "ublk_recover_disk", 00:04:18.569 "ublk_get_disks", 00:04:18.569 "ublk_stop_disk", 00:04:18.569 "ublk_start_disk", 00:04:18.569 "ublk_destroy_target", 00:04:18.569 "ublk_create_target", 00:04:18.569 "virtio_blk_create_transport", 00:04:18.569 "virtio_blk_get_transports", 00:04:18.569 "vhost_controller_set_coalescing", 00:04:18.569 "vhost_get_controllers", 00:04:18.569 "vhost_delete_controller", 00:04:18.569 "vhost_create_blk_controller", 00:04:18.569 "vhost_scsi_controller_remove_target", 00:04:18.569 "vhost_scsi_controller_add_target", 00:04:18.569 "vhost_start_scsi_controller", 00:04:18.570 "vhost_create_scsi_controller", 00:04:18.570 "thread_set_cpumask", 00:04:18.570 "framework_get_scheduler", 00:04:18.570 "framework_set_scheduler", 00:04:18.570 "framework_get_reactors", 00:04:18.570 "thread_get_io_channels", 00:04:18.570 "thread_get_pollers", 00:04:18.570 "thread_get_stats", 00:04:18.570 "framework_monitor_context_switch", 00:04:18.570 "spdk_kill_instance", 00:04:18.570 "log_enable_timestamps", 00:04:18.570 "log_get_flags", 00:04:18.570 "log_clear_flag", 00:04:18.570 "log_set_flag", 00:04:18.570 "log_get_level", 00:04:18.570 "log_set_level", 00:04:18.570 "log_get_print_level", 00:04:18.570 "log_set_print_level", 00:04:18.570 "framework_enable_cpumask_locks", 00:04:18.570 "framework_disable_cpumask_locks", 00:04:18.570 "framework_wait_init", 00:04:18.570 "framework_start_init", 00:04:18.570 "scsi_get_devices", 00:04:18.570 "bdev_get_histogram", 00:04:18.570 "bdev_enable_histogram", 00:04:18.570 "bdev_set_qos_limit", 00:04:18.570 "bdev_set_qd_sampling_period", 00:04:18.570 "bdev_get_bdevs", 00:04:18.570 "bdev_reset_iostat", 00:04:18.570 "bdev_get_iostat", 00:04:18.570 "bdev_examine", 00:04:18.570 "bdev_wait_for_examine", 00:04:18.570 "bdev_set_options", 00:04:18.570 "notify_get_notifications", 00:04:18.570 "notify_get_types", 00:04:18.570 "accel_get_stats", 00:04:18.570 "accel_set_options", 00:04:18.570 "accel_set_driver", 00:04:18.570 "accel_crypto_key_destroy", 00:04:18.570 "accel_crypto_keys_get", 00:04:18.570 "accel_crypto_key_create", 00:04:18.570 "accel_assign_opc", 00:04:18.570 "accel_get_module_info", 00:04:18.570 "accel_get_opc_assignments", 00:04:18.570 "vmd_rescan", 00:04:18.570 "vmd_remove_device", 00:04:18.570 "vmd_enable", 00:04:18.570 "sock_set_default_impl", 00:04:18.570 "sock_impl_set_options", 00:04:18.570 "sock_impl_get_options", 00:04:18.570 "iobuf_get_stats", 00:04:18.570 "iobuf_set_options", 00:04:18.570 "framework_get_pci_devices", 00:04:18.570 "framework_get_config", 00:04:18.570 "framework_get_subsystems", 00:04:18.570 "trace_get_info", 00:04:18.570 "trace_get_tpoint_group_mask", 00:04:18.570 "trace_disable_tpoint_group", 00:04:18.570 "trace_enable_tpoint_group", 00:04:18.570 "trace_clear_tpoint_mask", 00:04:18.570 "trace_set_tpoint_mask", 00:04:18.570 "keyring_get_keys", 00:04:18.570 "spdk_get_version", 00:04:18.570 "rpc_get_methods" 00:04:18.570 ] 00:04:18.570 15:27:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.570 15:27:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:18.570 15:27:19 -- common/autotest_common.sh@10 -- # set +x 00:04:18.570 15:27:19 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.570 15:27:19 -- spdkcli/tcp.sh@38 -- # killprocess 59112 00:04:18.570 15:27:19 -- common/autotest_common.sh@936 -- # '[' -z 59112 ']' 00:04:18.570 15:27:19 -- common/autotest_common.sh@940 -- # kill -0 59112 00:04:18.570 15:27:19 -- common/autotest_common.sh@941 -- # uname 00:04:18.570 15:27:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.570 15:27:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59112 00:04:18.828 15:27:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.828 15:27:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.828 killing process with pid 59112 00:04:18.828 15:27:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59112' 00:04:18.828 15:27:20 -- common/autotest_common.sh@955 -- # kill 59112 00:04:18.828 15:27:20 -- common/autotest_common.sh@960 -- # wait 59112 00:04:19.394 00:04:19.394 real 0m2.029s 00:04:19.394 user 0m3.579s 00:04:19.394 sys 0m0.578s 00:04:19.394 15:27:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:19.394 15:27:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.394 ************************************ 00:04:19.394 END TEST spdkcli_tcp 00:04:19.394 ************************************ 00:04:19.394 15:27:20 -- spdk/autotest.sh@175 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.394 15:27:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.394 15:27:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.394 15:27:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.394 ************************************ 00:04:19.394 START TEST dpdk_mem_utility 00:04:19.394 ************************************ 00:04:19.394 15:27:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.394 * Looking for test storage... 00:04:19.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:19.394 15:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:19.394 15:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59208 00:04:19.394 15:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.394 15:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59208 00:04:19.394 15:27:20 -- common/autotest_common.sh@817 -- # '[' -z 59208 ']' 00:04:19.394 15:27:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.394 15:27:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:19.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.394 15:27:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.394 15:27:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:19.394 15:27:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.653 [2024-04-17 15:27:20.876168] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:19.653 [2024-04-17 15:27:20.876289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ] 00:04:19.653 [2024-04-17 15:27:21.015009] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.912 [2024-04-17 15:27:21.154369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.481 15:27:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:20.481 15:27:21 -- common/autotest_common.sh@850 -- # return 0 00:04:20.481 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:20.481 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:20.481 15:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.481 15:27:21 -- common/autotest_common.sh@10 -- # set +x 00:04:20.481 { 00:04:20.481 "filename": "/tmp/spdk_mem_dump.txt" 00:04:20.481 } 00:04:20.481 15:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.481 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:20.481 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:20.481 1 heaps totaling size 814.000000 MiB 00:04:20.481 size: 814.000000 MiB heap id: 0 00:04:20.481 end heaps---------- 00:04:20.481 8 mempools totaling size 598.116089 MiB 00:04:20.481 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:20.481 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:20.481 size: 84.521057 MiB name: bdev_io_59208 00:04:20.481 size: 51.011292 MiB name: evtpool_59208 00:04:20.481 size: 50.003479 MiB name: msgpool_59208 00:04:20.481 size: 21.763794 MiB name: PDU_Pool 00:04:20.481 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:20.481 size: 0.026123 MiB name: Session_Pool 00:04:20.481 end mempools------- 00:04:20.481 6 memzones totaling size 4.142822 MiB 00:04:20.481 size: 1.000366 MiB name: RG_ring_0_59208 00:04:20.481 size: 1.000366 MiB name: RG_ring_1_59208 00:04:20.481 size: 1.000366 MiB name: RG_ring_4_59208 00:04:20.481 size: 1.000366 MiB name: RG_ring_5_59208 00:04:20.481 size: 0.125366 MiB name: RG_ring_2_59208 00:04:20.481 size: 0.015991 MiB name: RG_ring_3_59208 00:04:20.481 end memzones------- 00:04:20.481 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.481 heap id: 0 total size: 814.000000 MiB number of busy elements: 309 number of free elements: 15 00:04:20.481 list of free elements. 
size: 12.470276 MiB 00:04:20.481 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:20.481 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:20.481 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:20.481 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:20.481 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:20.481 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:20.481 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:20.481 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:20.481 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:20.481 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:04:20.481 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:20.481 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:20.481 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:20.481 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:20.481 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:20.481 list of standard malloc elements. size: 199.267151 MiB 00:04:20.481 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:20.481 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:20.481 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:20.481 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:20.481 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:20.481 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:20.481 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:20.481 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:20.481 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:20.481 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:20.481 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:20.481 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:20.481 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:20.482 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93280 with size: 0.000183 MiB 
00:04:20.482 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:20.482 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:20.482 element at 
address: 0x200027e6c480 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:20.482 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6e940 
with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:20.483 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:20.483 list of memzone associated elements. 
size: 602.262573 MiB 00:04:20.483 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:20.483 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.483 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:20.483 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:20.483 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:20.483 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59208_0 00:04:20.483 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:20.483 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59208_0 00:04:20.483 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:20.483 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59208_0 00:04:20.483 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:20.483 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:20.483 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:20.483 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.483 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:20.483 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59208 00:04:20.483 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:20.483 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59208 00:04:20.483 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:20.483 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59208 00:04:20.483 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:20.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.483 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:20.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.483 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:20.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.483 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:20.483 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.483 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:20.483 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59208 00:04:20.483 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:20.483 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59208 00:04:20.483 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:20.483 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59208 00:04:20.483 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:20.483 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59208 00:04:20.483 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:20.483 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59208 00:04:20.483 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:20.483 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.483 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:20.483 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.483 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:20.483 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.483 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:20.483 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59208 00:04:20.483 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:20.483 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.483 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:20.483 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.483 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:20.483 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59208 00:04:20.483 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:20.483 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.483 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:20.483 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59208 00:04:20.483 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:20.483 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59208 00:04:20.483 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:20.483 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.742 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.742 15:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59208 00:04:20.742 15:27:21 -- common/autotest_common.sh@936 -- # '[' -z 59208 ']' 00:04:20.742 15:27:21 -- common/autotest_common.sh@940 -- # kill -0 59208 00:04:20.742 15:27:21 -- common/autotest_common.sh@941 -- # uname 00:04:20.742 15:27:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:20.742 15:27:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59208 00:04:20.742 15:27:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:20.742 killing process with pid 59208 00:04:20.742 15:27:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:20.742 15:27:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59208' 00:04:20.742 15:27:21 -- common/autotest_common.sh@955 -- # kill 59208 00:04:20.742 15:27:21 -- common/autotest_common.sh@960 -- # wait 59208 00:04:21.309 00:04:21.309 real 0m1.797s 00:04:21.309 user 0m1.816s 00:04:21.309 sys 0m0.481s 00:04:21.309 15:27:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.309 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:04:21.309 ************************************ 00:04:21.309 END TEST dpdk_mem_utility 00:04:21.309 ************************************ 00:04:21.309 15:27:22 -- spdk/autotest.sh@176 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:21.309 15:27:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.309 15:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.309 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:04:21.309 ************************************ 00:04:21.309 START TEST event 00:04:21.309 ************************************ 00:04:21.309 15:27:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:21.309 * Looking for test storage... 
00:04:21.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:21.309 15:27:22 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:21.309 15:27:22 -- bdev/nbd_common.sh@6 -- # set -e 00:04:21.309 15:27:22 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.309 15:27:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:21.309 15:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.309 15:27:22 -- common/autotest_common.sh@10 -- # set +x 00:04:21.567 ************************************ 00:04:21.567 START TEST event_perf 00:04:21.567 ************************************ 00:04:21.567 15:27:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.567 Running I/O for 1 seconds...[2024-04-17 15:27:22.828302] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:21.567 [2024-04-17 15:27:22.828388] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:04:21.567 [2024-04-17 15:27:22.968520] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.826 [2024-04-17 15:27:23.101695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.826 [2024-04-17 15:27:23.101848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.826 [2024-04-17 15:27:23.101953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:21.826 Running I/O for 1 seconds...[2024-04-17 15:27:23.101957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.202 00:04:23.202 lcore 0: 197890 00:04:23.202 lcore 1: 197888 00:04:23.202 lcore 2: 197889 00:04:23.202 lcore 3: 197889 00:04:23.202 done. 00:04:23.202 00:04:23.202 real 0m1.458s 00:04:23.202 user 0m4.260s 00:04:23.202 sys 0m0.078s 00:04:23.202 15:27:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.202 15:27:24 -- common/autotest_common.sh@10 -- # set +x 00:04:23.202 ************************************ 00:04:23.202 END TEST event_perf 00:04:23.202 ************************************ 00:04:23.202 15:27:24 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:23.202 15:27:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:23.202 15:27:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.202 15:27:24 -- common/autotest_common.sh@10 -- # set +x 00:04:23.202 ************************************ 00:04:23.202 START TEST event_reactor 00:04:23.202 ************************************ 00:04:23.202 15:27:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:23.202 [2024-04-17 15:27:24.389404] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:23.202 [2024-04-17 15:27:24.389484] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ] 00:04:23.202 [2024-04-17 15:27:24.522108] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.460 [2024-04-17 15:27:24.656720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.396 test_start 00:04:24.396 oneshot 00:04:24.396 tick 100 00:04:24.396 tick 100 00:04:24.396 tick 250 00:04:24.396 tick 100 00:04:24.396 tick 100 00:04:24.396 tick 100 00:04:24.396 tick 250 00:04:24.396 tick 500 00:04:24.396 tick 100 00:04:24.396 tick 100 00:04:24.396 tick 250 00:04:24.396 tick 100 00:04:24.396 tick 100 00:04:24.396 test_end 00:04:24.396 00:04:24.396 real 0m1.441s 00:04:24.396 user 0m1.271s 00:04:24.396 sys 0m0.063s 00:04:24.396 15:27:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.396 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.396 ************************************ 00:04:24.396 END TEST event_reactor 00:04:24.396 ************************************ 00:04:24.665 15:27:25 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.665 15:27:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:24.665 15:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.665 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.665 ************************************ 00:04:24.665 START TEST event_reactor_perf 00:04:24.665 ************************************ 00:04:24.665 15:27:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.665 [2024-04-17 15:27:25.946693] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:24.665 [2024-04-17 15:27:25.946799] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:04:24.665 [2024-04-17 15:27:26.086198] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.939 [2024-04-17 15:27:26.197454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.312 test_start 00:04:26.312 test_end 00:04:26.312 Performance: 373725 events per second 00:04:26.312 00:04:26.312 real 0m1.416s 00:04:26.312 user 0m1.240s 00:04:26.312 sys 0m0.069s 00:04:26.312 15:27:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.312 ************************************ 00:04:26.312 END TEST event_reactor_perf 00:04:26.312 ************************************ 00:04:26.312 15:27:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.312 15:27:27 -- event/event.sh@49 -- # uname -s 00:04:26.312 15:27:27 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:26.312 15:27:27 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:26.312 15:27:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.312 15:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.312 15:27:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.312 ************************************ 00:04:26.312 START TEST event_scheduler 00:04:26.312 ************************************ 00:04:26.312 15:27:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:26.312 * Looking for test storage... 00:04:26.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:26.312 15:27:27 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:26.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.312 15:27:27 -- scheduler/scheduler.sh@35 -- # scheduler_pid=59443 00:04:26.312 15:27:27 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.312 15:27:27 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:26.312 15:27:27 -- scheduler/scheduler.sh@37 -- # waitforlisten 59443 00:04:26.312 15:27:27 -- common/autotest_common.sh@817 -- # '[' -z 59443 ']' 00:04:26.312 15:27:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.312 15:27:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:26.312 15:27:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.312 15:27:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:26.312 15:27:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.312 [2024-04-17 15:27:27.607941] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:26.312 [2024-04-17 15:27:27.608051] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:04:26.312 [2024-04-17 15:27:27.752454] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.570 [2024-04-17 15:27:27.901597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.570 [2024-04-17 15:27:27.901737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.570 [2024-04-17 15:27:27.901905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.570 [2024-04-17 15:27:27.901913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.504 15:27:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:27.504 15:27:28 -- common/autotest_common.sh@850 -- # return 0 00:04:27.504 15:27:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:27.504 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.504 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 POWER: Env isn't set yet! 00:04:27.504 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:27.504 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:27.504 POWER: Cannot set governor of lcore 0 to userspace 00:04:27.504 POWER: Attempting to initialise PSTAT power management... 00:04:27.504 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:27.504 POWER: Cannot set governor of lcore 0 to performance 00:04:27.504 POWER: Attempting to initialise AMD PSTATE power management... 00:04:27.504 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:27.504 POWER: Cannot set governor of lcore 0 to userspace 00:04:27.504 POWER: Attempting to initialise CPPC power management... 00:04:27.504 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:27.504 POWER: Cannot set governor of lcore 0 to userspace 00:04:27.504 POWER: Attempting to initialise VM power management... 00:04:27.504 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:27.504 POWER: Unable to set Power Management Environment for lcore 0 00:04:27.504 [2024-04-17 15:27:28.636274] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:27.504 [2024-04-17 15:27:28.636290] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:27.504 [2024-04-17 15:27:28.636299] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:27.504 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.504 15:27:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:27.504 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.504 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 [2024-04-17 15:27:28.765465] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:27.504 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.504 15:27:28 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:27.504 15:27:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.504 15:27:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.504 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 ************************************ 00:04:27.504 START TEST scheduler_create_thread 00:04:27.504 ************************************ 00:04:27.504 15:27:28 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:27.504 15:27:28 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:27.504 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.504 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 2 00:04:27.504 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.504 15:27:28 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 3 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 4 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 5 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 6 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 7 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 8 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 9 00:04:27.505 
15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 10 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.505 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:27.505 15:27:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:27.505 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.505 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.763 15:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.763 15:27:28 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:27.763 15:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.763 15:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.135 15:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.135 15:27:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:29.135 15:27:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:29.135 15:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.135 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:04:30.068 ************************************ 00:04:30.068 END TEST scheduler_create_thread 00:04:30.068 ************************************ 00:04:30.068 15:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.068 00:04:30.068 real 0m2.613s 00:04:30.068 user 0m0.019s 00:04:30.068 sys 0m0.005s 00:04:30.068 15:27:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.068 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:04:30.068 15:27:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:30.068 15:27:31 -- scheduler/scheduler.sh@46 -- # killprocess 59443 00:04:30.068 15:27:31 -- common/autotest_common.sh@936 -- # '[' -z 59443 ']' 00:04:30.068 15:27:31 -- common/autotest_common.sh@940 -- # kill -0 59443 00:04:30.068 15:27:31 -- common/autotest_common.sh@941 -- # uname 00:04:30.068 15:27:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:30.068 15:27:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59443 00:04:30.326 killing process with pid 59443 00:04:30.326 15:27:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:30.326 15:27:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:30.326 15:27:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59443' 00:04:30.326 15:27:31 -- common/autotest_common.sh@955 -- # kill 59443 00:04:30.326 15:27:31 -- common/autotest_common.sh@960 -- # wait 59443 00:04:30.585 [2024-04-17 15:27:31.938432] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
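Editor's sketch (not captured from this run): the scheduler_create_thread test traced above drives the scheduler test app entirely over RPC via the rpc_cmd wrapper. The same calls, issued by hand with scripts/rpc.py, look roughly like the snippet below; the option spellings and thread ids come from the trace, and it is assumed the scheduler_plugin directory under test/event/scheduler is on PYTHONPATH.

  # pinned threads: -n name, -m core mask, -a active percentage
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
  # unpinned threads with partial activity
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  # raise the new thread's activity to 50%, then create and delete a throwaway thread
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  victim=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$victim"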
00:04:31.164 ************************************ 00:04:31.164 END TEST event_scheduler 00:04:31.164 ************************************ 00:04:31.164 00:04:31.164 real 0m4.845s 00:04:31.164 user 0m8.982s 00:04:31.164 sys 0m0.460s 00:04:31.164 15:27:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.164 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:04:31.164 15:27:32 -- event/event.sh@51 -- # modprobe -n nbd 00:04:31.164 15:27:32 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:31.164 15:27:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.164 15:27:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.164 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:04:31.164 ************************************ 00:04:31.164 START TEST app_repeat 00:04:31.164 ************************************ 00:04:31.164 15:27:32 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:31.164 15:27:32 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.164 15:27:32 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.164 15:27:32 -- event/event.sh@13 -- # local nbd_list 00:04:31.164 15:27:32 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.164 15:27:32 -- event/event.sh@14 -- # local bdev_list 00:04:31.164 15:27:32 -- event/event.sh@15 -- # local repeat_times=4 00:04:31.164 15:27:32 -- event/event.sh@17 -- # modprobe nbd 00:04:31.164 Process app_repeat pid: 59551 00:04:31.164 15:27:32 -- event/event.sh@19 -- # repeat_pid=59551 00:04:31.164 15:27:32 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:31.164 15:27:32 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.164 15:27:32 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59551' 00:04:31.164 15:27:32 -- event/event.sh@23 -- # for i in {0..2} 00:04:31.164 spdk_app_start Round 0 00:04:31.164 15:27:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:31.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.164 15:27:32 -- event/event.sh@25 -- # waitforlisten 59551 /var/tmp/spdk-nbd.sock 00:04:31.164 15:27:32 -- common/autotest_common.sh@817 -- # '[' -z 59551 ']' 00:04:31.164 15:27:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.164 15:27:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:31.164 15:27:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.164 15:27:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:31.164 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:04:31.164 [2024-04-17 15:27:32.480192] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
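Editor's sketch: the "Waiting for process to start up and listen on UNIX domain socket ..." message above is printed by the waitforlisten helper, which polls until the freshly started app_repeat process has bound its RPC socket. A simplified stand-in for that loop is shown below; the real helper in test/common/autotest_common.sh also probes the socket with rpc.py and uses the max_retries=100 visible in the trace.

  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk-nbd.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          [[ -S "$sock" ]] && return 0             # UNIX domain socket is up
          sleep 0.1
      done
      return 1                                     # timed out
  }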
00:04:31.164 [2024-04-17 15:27:32.480287] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:04:31.422 [2024-04-17 15:27:32.615840] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.422 [2024-04-17 15:27:32.764129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.422 [2024-04-17 15:27:32.764138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.357 15:27:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:32.357 15:27:33 -- common/autotest_common.sh@850 -- # return 0 00:04:32.357 15:27:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.357 Malloc0 00:04:32.357 15:27:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.926 Malloc1 00:04:32.926 15:27:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.926 15:27:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@12 -- # local i 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.927 15:27:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.186 /dev/nbd0 00:04:33.186 15:27:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.186 15:27:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.186 15:27:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:33.186 15:27:34 -- common/autotest_common.sh@855 -- # local i 00:04:33.186 15:27:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:33.186 15:27:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:33.186 15:27:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:33.186 15:27:34 -- common/autotest_common.sh@859 -- # break 00:04:33.186 15:27:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:33.186 15:27:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:33.186 15:27:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.186 1+0 records in 00:04:33.186 1+0 records out 00:04:33.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408945 s, 10.0 MB/s 00:04:33.186 15:27:34 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.186 15:27:34 -- common/autotest_common.sh@872 -- # size=4096 00:04:33.186 15:27:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.186 15:27:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:33.186 15:27:34 -- common/autotest_common.sh@875 -- # return 0 00:04:33.186 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.186 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.186 15:27:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.444 /dev/nbd1 00:04:33.444 15:27:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.444 15:27:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.445 15:27:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:33.445 15:27:34 -- common/autotest_common.sh@855 -- # local i 00:04:33.445 15:27:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:33.445 15:27:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:33.445 15:27:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:33.445 15:27:34 -- common/autotest_common.sh@859 -- # break 00:04:33.445 15:27:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:33.445 15:27:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:33.445 15:27:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.445 1+0 records in 00:04:33.445 1+0 records out 00:04:33.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221905 s, 18.5 MB/s 00:04:33.445 15:27:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.445 15:27:34 -- common/autotest_common.sh@872 -- # size=4096 00:04:33.445 15:27:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.445 15:27:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:33.445 15:27:34 -- common/autotest_common.sh@875 -- # return 0 00:04:33.445 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.445 15:27:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.445 15:27:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.445 15:27:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.445 15:27:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.703 15:27:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.703 { 00:04:33.703 "nbd_device": "/dev/nbd0", 00:04:33.703 "bdev_name": "Malloc0" 00:04:33.703 }, 00:04:33.703 { 00:04:33.703 "nbd_device": "/dev/nbd1", 00:04:33.704 "bdev_name": "Malloc1" 00:04:33.704 } 00:04:33.704 ]' 00:04:33.704 15:27:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.704 { 00:04:33.704 "nbd_device": "/dev/nbd0", 00:04:33.704 "bdev_name": "Malloc0" 00:04:33.704 }, 00:04:33.704 { 00:04:33.704 "nbd_device": "/dev/nbd1", 00:04:33.704 "bdev_name": "Malloc1" 00:04:33.704 } 00:04:33.704 ]' 00:04:33.704 15:27:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.704 /dev/nbd1' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:04:33.704 /dev/nbd1' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.704 256+0 records in 00:04:33.704 256+0 records out 00:04:33.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00832263 s, 126 MB/s 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.704 256+0 records in 00:04:33.704 256+0 records out 00:04:33.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255109 s, 41.1 MB/s 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.704 256+0 records in 00:04:33.704 256+0 records out 00:04:33.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024535 s, 42.7 MB/s 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.704 15:27:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@51 -- # local i 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.963 15:27:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@41 -- # break 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@41 -- # break 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.221 15:27:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@65 -- # true 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.787 15:27:35 -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.787 15:27:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.046 15:27:36 -- event/event.sh@35 -- # sleep 3 00:04:35.304 [2024-04-17 15:27:36.625201] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.563 [2024-04-17 15:27:36.774187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.563 [2024-04-17 15:27:36.774198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.563 [2024-04-17 15:27:36.852530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.563 [2024-04-17 15:27:36.852613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
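Editor's sketch: each app_repeat round above follows the same write-then-verify pattern against the two malloc bdevs. Condensed into plain commands (socket path, sizes and the 1 MiB compare window are the ones visible in the trace; this is a hand-written summary, not the nbd_common.sh helpers themselves):

  sock=/var/tmp/spdk-nbd.sock
  scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096          # Malloc0: 64 MB, 4 KiB blocks
  scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0    # export it as /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct # write it to the device
  cmp -b -n 1M nbdrandtest /dev/nbd0                            # read back and verify
  scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM          # end the round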
00:04:38.094 spdk_app_start Round 1 00:04:38.094 15:27:39 -- event/event.sh@23 -- # for i in {0..2} 00:04:38.094 15:27:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.094 15:27:39 -- event/event.sh@25 -- # waitforlisten 59551 /var/tmp/spdk-nbd.sock 00:04:38.094 15:27:39 -- common/autotest_common.sh@817 -- # '[' -z 59551 ']' 00:04:38.094 15:27:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.094 15:27:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.094 15:27:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.094 15:27:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.094 15:27:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.352 15:27:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.352 15:27:39 -- common/autotest_common.sh@850 -- # return 0 00:04:38.352 15:27:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.611 Malloc0 00:04:38.611 15:27:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.870 Malloc1 00:04:38.870 15:27:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@12 -- # local i 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.870 15:27:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.129 /dev/nbd0 00:04:39.129 15:27:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.129 15:27:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.129 15:27:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:39.129 15:27:40 -- common/autotest_common.sh@855 -- # local i 00:04:39.129 15:27:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:39.129 15:27:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:39.129 15:27:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:39.129 15:27:40 -- common/autotest_common.sh@859 -- # break 00:04:39.129 15:27:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:39.129 15:27:40 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:04:39.129 15:27:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.129 1+0 records in 00:04:39.129 1+0 records out 00:04:39.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277537 s, 14.8 MB/s 00:04:39.129 15:27:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.129 15:27:40 -- common/autotest_common.sh@872 -- # size=4096 00:04:39.129 15:27:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.129 15:27:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:39.129 15:27:40 -- common/autotest_common.sh@875 -- # return 0 00:04:39.129 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.129 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.129 15:27:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.388 /dev/nbd1 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.388 15:27:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:39.388 15:27:40 -- common/autotest_common.sh@855 -- # local i 00:04:39.388 15:27:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:39.388 15:27:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:39.388 15:27:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:39.388 15:27:40 -- common/autotest_common.sh@859 -- # break 00:04:39.388 15:27:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:39.388 15:27:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:39.388 15:27:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.388 1+0 records in 00:04:39.388 1+0 records out 00:04:39.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214118 s, 19.1 MB/s 00:04:39.388 15:27:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.388 15:27:40 -- common/autotest_common.sh@872 -- # size=4096 00:04:39.388 15:27:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.388 15:27:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:39.388 15:27:40 -- common/autotest_common.sh@875 -- # return 0 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.388 15:27:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.647 15:27:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.647 { 00:04:39.647 "nbd_device": "/dev/nbd0", 00:04:39.647 "bdev_name": "Malloc0" 00:04:39.647 }, 00:04:39.647 { 00:04:39.647 "nbd_device": "/dev/nbd1", 00:04:39.647 "bdev_name": "Malloc1" 00:04:39.647 } 00:04:39.647 ]' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.647 { 00:04:39.647 "nbd_device": "/dev/nbd0", 00:04:39.647 "bdev_name": "Malloc0" 00:04:39.647 }, 00:04:39.647 { 00:04:39.647 
"nbd_device": "/dev/nbd1", 00:04:39.647 "bdev_name": "Malloc1" 00:04:39.647 } 00:04:39.647 ]' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.647 /dev/nbd1' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.647 /dev/nbd1' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.647 256+0 records in 00:04:39.647 256+0 records out 00:04:39.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00827581 s, 127 MB/s 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.647 15:27:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.906 256+0 records in 00:04:39.906 256+0 records out 00:04:39.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225546 s, 46.5 MB/s 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.906 256+0 records in 00:04:39.906 256+0 records out 00:04:39.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265512 s, 39.5 MB/s 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:04:39.906 15:27:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@51 -- # local i 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.906 15:27:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@41 -- # break 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.165 15:27:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@41 -- # break 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.434 15:27:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.693 15:27:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.693 15:27:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.693 15:27:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@65 -- # true 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.693 15:27:42 -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.693 15:27:42 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.963 15:27:42 -- event/event.sh@35 -- # sleep 3 00:04:41.224 [2024-04-17 15:27:42.624947] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.490 [2024-04-17 15:27:42.770475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.490 [2024-04-17 15:27:42.770486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.490 [2024-04-17 15:27:42.846765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
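Editor's sketch: the nbd_get_count checks interleaved above ('[' 2 -ne 2 ']' while the devices are attached, '[' 0 -ne 0 ']' after teardown) are derived from the nbd_get_disks RPC output. The same pipeline as a standalone snippet, with the jq expression and grep pattern copied from the traced nbd_common.sh:

  disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$disk_names" | grep -c /dev/nbd || true)   # grep exits non-zero when the count is 0
  echo "attached nbd devices: $count"                      # 2 while running, 0 after nbd_stop_disk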
00:04:41.490 [2024-04-17 15:27:42.846838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.022 spdk_app_start Round 2 00:04:44.022 15:27:45 -- event/event.sh@23 -- # for i in {0..2} 00:04:44.022 15:27:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.022 15:27:45 -- event/event.sh@25 -- # waitforlisten 59551 /var/tmp/spdk-nbd.sock 00:04:44.022 15:27:45 -- common/autotest_common.sh@817 -- # '[' -z 59551 ']' 00:04:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.022 15:27:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.022 15:27:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.022 15:27:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.022 15:27:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.022 15:27:45 -- common/autotest_common.sh@10 -- # set +x 00:04:44.281 15:27:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.281 15:27:45 -- common/autotest_common.sh@850 -- # return 0 00:04:44.281 15:27:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.540 Malloc0 00:04:44.540 15:27:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.799 Malloc1 00:04:44.799 15:27:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@12 -- # local i 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.799 15:27:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.057 /dev/nbd0 00:04:45.058 15:27:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.058 15:27:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.058 15:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:45.058 15:27:46 -- common/autotest_common.sh@855 -- # local i 00:04:45.058 15:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:45.058 15:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:45.058 15:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:45.058 15:27:46 -- common/autotest_common.sh@859 
-- # break 00:04:45.058 15:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:45.058 15:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:45.058 15:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.058 1+0 records in 00:04:45.058 1+0 records out 00:04:45.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028002 s, 14.6 MB/s 00:04:45.058 15:27:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.058 15:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:04:45.058 15:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.058 15:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:45.058 15:27:46 -- common/autotest_common.sh@875 -- # return 0 00:04:45.058 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.058 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.058 15:27:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.316 /dev/nbd1 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.316 15:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:45.316 15:27:46 -- common/autotest_common.sh@855 -- # local i 00:04:45.316 15:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:45.316 15:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:45.316 15:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:45.316 15:27:46 -- common/autotest_common.sh@859 -- # break 00:04:45.316 15:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:45.316 15:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:45.316 15:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.316 1+0 records in 00:04:45.316 1+0 records out 00:04:45.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236041 s, 17.4 MB/s 00:04:45.316 15:27:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.316 15:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:04:45.316 15:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.316 15:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:45.316 15:27:46 -- common/autotest_common.sh@875 -- # return 0 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.316 15:27:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.575 { 00:04:45.575 "nbd_device": "/dev/nbd0", 00:04:45.575 "bdev_name": "Malloc0" 00:04:45.575 }, 00:04:45.575 { 00:04:45.575 "nbd_device": "/dev/nbd1", 00:04:45.575 "bdev_name": "Malloc1" 00:04:45.575 } 00:04:45.575 ]' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.575 { 00:04:45.575 "nbd_device": "/dev/nbd0", 00:04:45.575 
"bdev_name": "Malloc0" 00:04:45.575 }, 00:04:45.575 { 00:04:45.575 "nbd_device": "/dev/nbd1", 00:04:45.575 "bdev_name": "Malloc1" 00:04:45.575 } 00:04:45.575 ]' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.575 /dev/nbd1' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.575 /dev/nbd1' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.575 256+0 records in 00:04:45.575 256+0 records out 00:04:45.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460647 s, 228 MB/s 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.575 256+0 records in 00:04:45.575 256+0 records out 00:04:45.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020868 s, 50.2 MB/s 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.575 15:27:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.575 256+0 records in 00:04:45.575 256+0 records out 00:04:45.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024244 s, 43.3 MB/s 00:04:45.575 15:27:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.575 15:27:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.575 15:27:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.575 15:27:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.833 15:27:47 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@51 -- # local i 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.833 15:27:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@41 -- # break 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.091 15:27:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@41 -- # break 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.349 15:27:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@65 -- # true 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.607 15:27:47 -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.607 15:27:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.865 15:27:48 -- event/event.sh@35 -- # sleep 3 00:04:47.122 [2024-04-17 15:27:48.507804] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.381 [2024-04-17 15:27:48.651297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.381 [2024-04-17 15:27:48.651308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.381 [2024-04-17 15:27:48.727265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:04:47.381 [2024-04-17 15:27:48.727345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.914 15:27:51 -- event/event.sh@38 -- # waitforlisten 59551 /var/tmp/spdk-nbd.sock 00:04:49.914 15:27:51 -- common/autotest_common.sh@817 -- # '[' -z 59551 ']' 00:04:49.914 15:27:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.914 15:27:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:49.914 15:27:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.914 15:27:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:49.914 15:27:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.172 15:27:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.172 15:27:51 -- common/autotest_common.sh@850 -- # return 0 00:04:50.172 15:27:51 -- event/event.sh@39 -- # killprocess 59551 00:04:50.172 15:27:51 -- common/autotest_common.sh@936 -- # '[' -z 59551 ']' 00:04:50.173 15:27:51 -- common/autotest_common.sh@940 -- # kill -0 59551 00:04:50.173 15:27:51 -- common/autotest_common.sh@941 -- # uname 00:04:50.173 15:27:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.173 15:27:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59551 00:04:50.173 killing process with pid 59551 00:04:50.173 15:27:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.173 15:27:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.173 15:27:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59551' 00:04:50.173 15:27:51 -- common/autotest_common.sh@955 -- # kill 59551 00:04:50.173 15:27:51 -- common/autotest_common.sh@960 -- # wait 59551 00:04:50.431 spdk_app_start is called in Round 0. 00:04:50.431 Shutdown signal received, stop current app iteration 00:04:50.431 Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 reinitialization... 00:04:50.431 spdk_app_start is called in Round 1. 00:04:50.431 Shutdown signal received, stop current app iteration 00:04:50.431 Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 reinitialization... 00:04:50.431 spdk_app_start is called in Round 2. 00:04:50.431 Shutdown signal received, stop current app iteration 00:04:50.431 Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 reinitialization... 00:04:50.431 spdk_app_start is called in Round 3. 
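Editor's sketch: killprocess 59551 above shows the common teardown helper at work: confirm the pid is still alive, check the process name so a stray sudo wrapper is never signalled, then kill and wait. A simplified version of that pattern is sketched below; the real helper in autotest_common.sh additionally branches on uname for non-Linux hosts, as the '[' Linux = Linux ']' check in the trace suggests.

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1               # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")              # e.g. reactor_0
      [[ "$name" == sudo ]] && return 1                    # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                  # reap it so later waits stay clean
  }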
00:04:50.431 Shutdown signal received, stop current app iteration 00:04:50.431 15:27:51 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.431 15:27:51 -- event/event.sh@42 -- # return 0 00:04:50.431 00:04:50.431 real 0m19.355s 00:04:50.431 user 0m42.560s 00:04:50.431 sys 0m3.180s 00:04:50.431 15:27:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.431 15:27:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.431 ************************************ 00:04:50.431 END TEST app_repeat 00:04:50.431 ************************************ 00:04:50.431 15:27:51 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.431 15:27:51 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:50.431 15:27:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.431 15:27:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.431 15:27:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.689 ************************************ 00:04:50.689 START TEST cpu_locks 00:04:50.689 ************************************ 00:04:50.689 15:27:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:50.689 * Looking for test storage... 00:04:50.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:50.689 15:27:52 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.689 15:27:52 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.689 15:27:52 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.689 15:27:52 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.689 15:27:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.689 15:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.689 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.689 ************************************ 00:04:50.689 START TEST default_locks 00:04:50.689 ************************************ 00:04:50.689 15:27:52 -- common/autotest_common.sh@1111 -- # default_locks 00:04:50.689 15:27:52 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.689 15:27:52 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60006 00:04:50.689 15:27:52 -- event/cpu_locks.sh@47 -- # waitforlisten 60006 00:04:50.689 15:27:52 -- common/autotest_common.sh@817 -- # '[' -z 60006 ']' 00:04:50.689 15:27:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.689 15:27:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.689 15:27:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.689 15:27:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.689 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.960 [2024-04-17 15:27:52.177917] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:50.960 [2024-04-17 15:27:52.178132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60006 ] 00:04:50.960 [2024-04-17 15:27:52.320472] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.218 [2024-04-17 15:27:52.459923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.785 15:27:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.785 15:27:53 -- common/autotest_common.sh@850 -- # return 0 00:04:51.785 15:27:53 -- event/cpu_locks.sh@49 -- # locks_exist 60006 00:04:51.785 15:27:53 -- event/cpu_locks.sh@22 -- # lslocks -p 60006 00:04:51.785 15:27:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.353 15:27:53 -- event/cpu_locks.sh@50 -- # killprocess 60006 00:04:52.353 15:27:53 -- common/autotest_common.sh@936 -- # '[' -z 60006 ']' 00:04:52.353 15:27:53 -- common/autotest_common.sh@940 -- # kill -0 60006 00:04:52.353 15:27:53 -- common/autotest_common.sh@941 -- # uname 00:04:52.353 15:27:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.353 15:27:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60006 00:04:52.353 15:27:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.353 15:27:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.353 killing process with pid 60006 00:04:52.353 15:27:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60006' 00:04:52.353 15:27:53 -- common/autotest_common.sh@955 -- # kill 60006 00:04:52.353 15:27:53 -- common/autotest_common.sh@960 -- # wait 60006 00:04:52.919 15:27:54 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60006 00:04:52.919 15:27:54 -- common/autotest_common.sh@638 -- # local es=0 00:04:52.919 15:27:54 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60006 00:04:52.919 15:27:54 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:52.919 15:27:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.920 15:27:54 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:52.920 15:27:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.920 15:27:54 -- common/autotest_common.sh@641 -- # waitforlisten 60006 00:04:52.920 15:27:54 -- common/autotest_common.sh@817 -- # '[' -z 60006 ']' 00:04:53.196 15:27:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.197 15:27:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:53.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.197 15:27:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
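Editor's sketch: the default_locks test above asserts that an spdk_tgt started with core mask 0x1 holds its per-core lock file, using lslocks. The locks_exist check reduces to the snippet below (pid and the spdk_cpu_lock name are taken from the trace; the exact lock-file path is an implementation detail not shown in this log):

  locks_exist() {
      local pid=$1
      # spdk_tgt takes one file lock per claimed core; lslocks lists locks held by the pid
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 60006 && echo "core lock is held by pid 60006"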
00:04:53.197 15:27:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:53.197 15:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60006) - No such process 00:04:53.197 ERROR: process (pid: 60006) is no longer running 00:04:53.197 15:27:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:53.197 15:27:54 -- common/autotest_common.sh@850 -- # return 1 00:04:53.197 15:27:54 -- common/autotest_common.sh@641 -- # es=1 00:04:53.197 15:27:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:53.197 15:27:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:53.197 15:27:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:53.197 15:27:54 -- event/cpu_locks.sh@54 -- # no_locks 00:04:53.197 15:27:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.197 15:27:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.197 15:27:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.197 00:04:53.197 real 0m2.276s 00:04:53.197 user 0m2.334s 00:04:53.197 sys 0m0.712s 00:04:53.197 15:27:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.197 ************************************ 00:04:53.197 END TEST default_locks 00:04:53.197 15:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.197 ************************************ 00:04:53.197 15:27:54 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:53.197 15:27:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.197 15:27:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.197 15:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.197 ************************************ 00:04:53.197 START TEST default_locks_via_rpc 00:04:53.197 ************************************ 00:04:53.197 15:27:54 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:53.197 15:27:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60062 00:04:53.197 15:27:54 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.197 15:27:54 -- event/cpu_locks.sh@63 -- # waitforlisten 60062 00:04:53.197 15:27:54 -- common/autotest_common.sh@817 -- # '[' -z 60062 ']' 00:04:53.197 15:27:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.197 15:27:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:53.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.197 15:27:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.197 15:27:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:53.197 15:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.197 [2024-04-17 15:27:54.555739] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:04:53.197 [2024-04-17 15:27:54.555874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:04:53.457 [2024-04-17 15:27:54.692923] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.457 [2024-04-17 15:27:54.852673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.394 15:27:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.394 15:27:55 -- common/autotest_common.sh@850 -- # return 0 00:04:54.394 15:27:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:54.394 15:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.394 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.394 15:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.394 15:27:55 -- event/cpu_locks.sh@67 -- # no_locks 00:04:54.394 15:27:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:54.394 15:27:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:54.394 15:27:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:54.394 15:27:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:54.394 15:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.394 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.394 15:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.394 15:27:55 -- event/cpu_locks.sh@71 -- # locks_exist 60062 00:04:54.394 15:27:55 -- event/cpu_locks.sh@22 -- # lslocks -p 60062 00:04:54.394 15:27:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.651 15:27:55 -- event/cpu_locks.sh@73 -- # killprocess 60062 00:04:54.651 15:27:55 -- common/autotest_common.sh@936 -- # '[' -z 60062 ']' 00:04:54.651 15:27:55 -- common/autotest_common.sh@940 -- # kill -0 60062 00:04:54.651 15:27:55 -- common/autotest_common.sh@941 -- # uname 00:04:54.651 15:27:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:54.651 15:27:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60062 00:04:54.651 15:27:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:54.651 15:27:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:54.651 killing process with pid 60062 00:04:54.651 15:27:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60062' 00:04:54.651 15:27:55 -- common/autotest_common.sh@955 -- # kill 60062 00:04:54.651 15:27:55 -- common/autotest_common.sh@960 -- # wait 60062 00:04:55.253 00:04:55.253 real 0m2.087s 00:04:55.253 user 0m2.118s 00:04:55.253 sys 0m0.644s 00:04:55.253 15:27:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.253 15:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.253 ************************************ 00:04:55.253 END TEST default_locks_via_rpc 00:04:55.253 ************************************ 00:04:55.253 15:27:56 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:55.253 15:27:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.253 15:27:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.253 15:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.531 ************************************ 00:04:55.532 START TEST non_locking_app_on_locked_coremask 00:04:55.532 ************************************ 00:04:55.532 15:27:56 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:55.532 15:27:56 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60117 00:04:55.532 15:27:56 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.532 15:27:56 -- event/cpu_locks.sh@81 -- # waitforlisten 60117 /var/tmp/spdk.sock 00:04:55.532 15:27:56 -- common/autotest_common.sh@817 -- # '[' -z 60117 ']' 00:04:55.532 15:27:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.532 15:27:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.532 15:27:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.532 15:27:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:55.532 15:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.532 [2024-04-17 15:27:56.765390] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:55.532 [2024-04-17 15:27:56.765500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60117 ] 00:04:55.532 [2024-04-17 15:27:56.899842] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.790 [2024-04-17 15:27:57.055515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.356 15:27:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.356 15:27:57 -- common/autotest_common.sh@850 -- # return 0 00:04:56.356 15:27:57 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:56.356 15:27:57 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60133 00:04:56.356 15:27:57 -- event/cpu_locks.sh@85 -- # waitforlisten 60133 /var/tmp/spdk2.sock 00:04:56.356 15:27:57 -- common/autotest_common.sh@817 -- # '[' -z 60133 ']' 00:04:56.356 15:27:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.356 15:27:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.356 15:27:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.356 15:27:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.356 15:27:57 -- common/autotest_common.sh@10 -- # set +x 00:04:56.614 [2024-04-17 15:27:57.840462] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:04:56.614 [2024-04-17 15:27:57.840566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60133 ] 00:04:56.614 [2024-04-17 15:27:57.981097] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.614 [2024-04-17 15:27:57.981159] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.872 [2024-04-17 15:27:58.285210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.808 15:27:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:57.808 15:27:58 -- common/autotest_common.sh@850 -- # return 0 00:04:57.808 15:27:58 -- event/cpu_locks.sh@87 -- # locks_exist 60117 00:04:57.808 15:27:58 -- event/cpu_locks.sh@22 -- # lslocks -p 60117 00:04:57.808 15:27:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.373 15:27:59 -- event/cpu_locks.sh@89 -- # killprocess 60117 00:04:58.373 15:27:59 -- common/autotest_common.sh@936 -- # '[' -z 60117 ']' 00:04:58.374 15:27:59 -- common/autotest_common.sh@940 -- # kill -0 60117 00:04:58.374 15:27:59 -- common/autotest_common.sh@941 -- # uname 00:04:58.374 15:27:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.374 15:27:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60117 00:04:58.631 15:27:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.631 15:27:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.631 killing process with pid 60117 00:04:58.631 15:27:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60117' 00:04:58.631 15:27:59 -- common/autotest_common.sh@955 -- # kill 60117 00:04:58.631 15:27:59 -- common/autotest_common.sh@960 -- # wait 60117 00:05:00.004 15:28:01 -- event/cpu_locks.sh@90 -- # killprocess 60133 00:05:00.004 15:28:01 -- common/autotest_common.sh@936 -- # '[' -z 60133 ']' 00:05:00.004 15:28:01 -- common/autotest_common.sh@940 -- # kill -0 60133 00:05:00.004 15:28:01 -- common/autotest_common.sh@941 -- # uname 00:05:00.004 15:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.004 15:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60133 00:05:00.004 15:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.004 15:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.004 killing process with pid 60133 00:05:00.004 15:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60133' 00:05:00.004 15:28:01 -- common/autotest_common.sh@955 -- # kill 60133 00:05:00.004 15:28:01 -- common/autotest_common.sh@960 -- # wait 60133 00:05:00.570 00:05:00.570 real 0m5.038s 00:05:00.570 user 0m5.262s 00:05:00.570 sys 0m1.368s 00:05:00.570 15:28:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.570 15:28:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.570 ************************************ 00:05:00.570 END TEST non_locking_app_on_locked_coremask 00:05:00.570 ************************************ 00:05:00.570 15:28:01 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:00.570 15:28:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.570 15:28:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.570 15:28:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.570 ************************************ 00:05:00.570 START TEST locking_app_on_unlocked_coremask 00:05:00.570 ************************************ 00:05:00.570 15:28:01 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:00.570 15:28:01 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60215 00:05:00.570 15:28:01 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:05:00.570 15:28:01 -- event/cpu_locks.sh@99 -- # waitforlisten 60215 /var/tmp/spdk.sock 00:05:00.570 15:28:01 -- common/autotest_common.sh@817 -- # '[' -z 60215 ']' 00:05:00.570 15:28:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.570 15:28:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.570 15:28:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.570 15:28:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.570 15:28:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.570 [2024-04-17 15:28:01.940570] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:00.570 [2024-04-17 15:28:01.940726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60215 ] 00:05:00.828 [2024-04-17 15:28:02.080453] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:00.828 [2024-04-17 15:28:02.080515] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.828 [2024-04-17 15:28:02.237316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.763 15:28:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.763 15:28:02 -- common/autotest_common.sh@850 -- # return 0 00:05:01.763 15:28:02 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:01.763 15:28:02 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60231 00:05:01.763 15:28:02 -- event/cpu_locks.sh@103 -- # waitforlisten 60231 /var/tmp/spdk2.sock 00:05:01.763 15:28:02 -- common/autotest_common.sh@817 -- # '[' -z 60231 ']' 00:05:01.763 15:28:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.763 15:28:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.763 15:28:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.763 15:28:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.763 15:28:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.763 [2024-04-17 15:28:02.939320] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:01.763 [2024-04-17 15:28:02.939457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:05:01.763 [2024-04-17 15:28:03.083696] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.020 [2024-04-17 15:28:03.399177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.953 15:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.953 15:28:04 -- common/autotest_common.sh@850 -- # return 0 00:05:02.953 15:28:04 -- event/cpu_locks.sh@105 -- # locks_exist 60231 00:05:02.953 15:28:04 -- event/cpu_locks.sh@22 -- # lslocks -p 60231 00:05:02.953 15:28:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.519 15:28:04 -- event/cpu_locks.sh@107 -- # killprocess 60215 00:05:03.519 15:28:04 -- common/autotest_common.sh@936 -- # '[' -z 60215 ']' 00:05:03.519 15:28:04 -- common/autotest_common.sh@940 -- # kill -0 60215 00:05:03.519 15:28:04 -- common/autotest_common.sh@941 -- # uname 00:05:03.519 15:28:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.519 15:28:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60215 00:05:03.519 15:28:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.519 15:28:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.519 killing process with pid 60215 00:05:03.519 15:28:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60215' 00:05:03.519 15:28:04 -- common/autotest_common.sh@955 -- # kill 60215 00:05:03.519 15:28:04 -- common/autotest_common.sh@960 -- # wait 60215 00:05:04.893 15:28:06 -- event/cpu_locks.sh@108 -- # killprocess 60231 00:05:04.893 15:28:06 -- common/autotest_common.sh@936 -- # '[' -z 60231 ']' 00:05:04.893 15:28:06 -- common/autotest_common.sh@940 -- # kill -0 60231 00:05:04.893 15:28:06 -- common/autotest_common.sh@941 -- # uname 00:05:04.893 15:28:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.893 15:28:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60231 00:05:04.893 15:28:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.893 15:28:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.893 killing process with pid 60231 00:05:04.893 15:28:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60231' 00:05:04.893 15:28:06 -- common/autotest_common.sh@955 -- # kill 60231 00:05:04.893 15:28:06 -- common/autotest_common.sh@960 -- # wait 60231 00:05:05.460 00:05:05.460 real 0m4.925s 00:05:05.460 user 0m5.143s 00:05:05.460 sys 0m1.298s 00:05:05.460 15:28:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.460 15:28:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.460 ************************************ 00:05:05.460 END TEST locking_app_on_unlocked_coremask 00:05:05.460 ************************************ 00:05:05.460 15:28:06 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:05.460 15:28:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.460 15:28:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.460 15:28:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.718 ************************************ 00:05:05.718 START TEST locking_app_on_locked_coremask 00:05:05.718 
************************************ 00:05:05.718 15:28:06 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:05.718 15:28:06 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60313 00:05:05.718 15:28:06 -- event/cpu_locks.sh@116 -- # waitforlisten 60313 /var/tmp/spdk.sock 00:05:05.718 15:28:06 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.718 15:28:06 -- common/autotest_common.sh@817 -- # '[' -z 60313 ']' 00:05:05.718 15:28:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.718 15:28:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.718 15:28:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.718 15:28:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.718 15:28:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.718 [2024-04-17 15:28:07.020028] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:05.718 [2024-04-17 15:28:07.020535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:05:05.977 [2024-04-17 15:28:07.164083] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.977 [2024-04-17 15:28:07.326252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.544 15:28:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.544 15:28:07 -- common/autotest_common.sh@850 -- # return 0 00:05:06.544 15:28:07 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.544 15:28:07 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60329 00:05:06.544 15:28:07 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60329 /var/tmp/spdk2.sock 00:05:06.544 15:28:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:06.544 15:28:07 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60329 /var/tmp/spdk2.sock 00:05:06.544 15:28:07 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:06.544 15:28:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.544 15:28:07 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:06.544 15:28:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.544 15:28:07 -- common/autotest_common.sh@641 -- # waitforlisten 60329 /var/tmp/spdk2.sock 00:05:06.544 15:28:07 -- common/autotest_common.sh@817 -- # '[' -z 60329 ']' 00:05:06.544 15:28:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.544 15:28:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:06.544 15:28:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.544 15:28:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:06.544 15:28:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.803 [2024-04-17 15:28:08.018368] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:06.803 [2024-04-17 15:28:08.018803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:05:06.803 [2024-04-17 15:28:08.162680] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60313 has claimed it. 00:05:06.803 [2024-04-17 15:28:08.162774] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:07.369 ERROR: process (pid: 60329) is no longer running 00:05:07.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60329) - No such process 00:05:07.369 15:28:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.369 15:28:08 -- common/autotest_common.sh@850 -- # return 1 00:05:07.369 15:28:08 -- common/autotest_common.sh@641 -- # es=1 00:05:07.369 15:28:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:07.369 15:28:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:07.369 15:28:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:07.369 15:28:08 -- event/cpu_locks.sh@122 -- # locks_exist 60313 00:05:07.369 15:28:08 -- event/cpu_locks.sh@22 -- # lslocks -p 60313 00:05:07.369 15:28:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.936 15:28:09 -- event/cpu_locks.sh@124 -- # killprocess 60313 00:05:07.936 15:28:09 -- common/autotest_common.sh@936 -- # '[' -z 60313 ']' 00:05:07.936 15:28:09 -- common/autotest_common.sh@940 -- # kill -0 60313 00:05:07.936 15:28:09 -- common/autotest_common.sh@941 -- # uname 00:05:07.936 15:28:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.936 15:28:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60313 00:05:07.936 killing process with pid 60313 00:05:07.936 15:28:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.936 15:28:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.936 15:28:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60313' 00:05:07.936 15:28:09 -- common/autotest_common.sh@955 -- # kill 60313 00:05:07.936 15:28:09 -- common/autotest_common.sh@960 -- # wait 60313 00:05:08.503 ************************************ 00:05:08.503 END TEST locking_app_on_locked_coremask 00:05:08.503 ************************************ 00:05:08.503 00:05:08.503 real 0m2.826s 00:05:08.503 user 0m3.110s 00:05:08.503 sys 0m0.720s 00:05:08.503 15:28:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.503 15:28:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.503 15:28:09 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:08.503 15:28:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.503 15:28:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.503 15:28:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.503 ************************************ 00:05:08.503 START TEST locking_overlapped_coremask 00:05:08.503 ************************************ 00:05:08.503 15:28:09 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.503 15:28:09 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60384 00:05:08.503 15:28:09 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:08.503 15:28:09 -- event/cpu_locks.sh@133 -- # waitforlisten 60384 /var/tmp/spdk.sock 00:05:08.503 15:28:09 -- common/autotest_common.sh@817 -- # '[' -z 60384 ']' 00:05:08.503 15:28:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.503 15:28:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.503 15:28:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.503 15:28:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.503 15:28:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.503 [2024-04-17 15:28:09.942311] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:08.503 [2024-04-17 15:28:09.942706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:05:08.762 [2024-04-17 15:28:10.082977] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.020 [2024-04-17 15:28:10.246553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.020 [2024-04-17 15:28:10.246768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.020 [2024-04-17 15:28:10.246781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.586 15:28:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.586 15:28:10 -- common/autotest_common.sh@850 -- # return 0 00:05:09.586 15:28:10 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60402 00:05:09.586 15:28:10 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.586 15:28:10 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60402 /var/tmp/spdk2.sock 00:05:09.586 15:28:10 -- common/autotest_common.sh@638 -- # local es=0 00:05:09.586 15:28:10 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60402 /var/tmp/spdk2.sock 00:05:09.586 15:28:10 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:09.586 15:28:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.586 15:28:10 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:09.586 15:28:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.586 15:28:10 -- common/autotest_common.sh@641 -- # waitforlisten 60402 /var/tmp/spdk2.sock 00:05:09.586 15:28:10 -- common/autotest_common.sh@817 -- # '[' -z 60402 ']' 00:05:09.586 15:28:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.586 15:28:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.586 15:28:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.586 15:28:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.586 15:28:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.586 [2024-04-17 15:28:11.013961] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:09.586 [2024-04-17 15:28:11.014739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:05:09.845 [2024-04-17 15:28:11.161793] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60384 has claimed it. 00:05:09.845 [2024-04-17 15:28:11.161890] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.412 ERROR: process (pid: 60402) is no longer running 00:05:10.412 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60402) - No such process 00:05:10.412 15:28:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.412 15:28:11 -- common/autotest_common.sh@850 -- # return 1 00:05:10.412 15:28:11 -- common/autotest_common.sh@641 -- # es=1 00:05:10.412 15:28:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:10.412 15:28:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:10.412 15:28:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:10.412 15:28:11 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.412 15:28:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.412 15:28:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.412 15:28:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.412 15:28:11 -- event/cpu_locks.sh@141 -- # killprocess 60384 00:05:10.412 15:28:11 -- common/autotest_common.sh@936 -- # '[' -z 60384 ']' 00:05:10.412 15:28:11 -- common/autotest_common.sh@940 -- # kill -0 60384 00:05:10.412 15:28:11 -- common/autotest_common.sh@941 -- # uname 00:05:10.412 15:28:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.412 15:28:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60384 00:05:10.412 15:28:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.412 15:28:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.412 15:28:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60384' 00:05:10.412 killing process with pid 60384 00:05:10.412 15:28:11 -- common/autotest_common.sh@955 -- # kill 60384 00:05:10.412 15:28:11 -- common/autotest_common.sh@960 -- # wait 60384 00:05:11.012 00:05:11.012 real 0m2.527s 00:05:11.012 user 0m6.696s 00:05:11.012 sys 0m0.591s 00:05:11.012 ************************************ 00:05:11.012 END TEST locking_overlapped_coremask 00:05:11.012 ************************************ 00:05:11.012 15:28:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.012 15:28:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.012 15:28:12 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.012 15:28:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.012 15:28:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.012 15:28:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.271 ************************************ 00:05:11.271 START TEST locking_overlapped_coremask_via_rpc 00:05:11.271 ************************************ 
00:05:11.271 15:28:12 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:11.271 15:28:12 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60452 00:05:11.271 15:28:12 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.271 15:28:12 -- event/cpu_locks.sh@149 -- # waitforlisten 60452 /var/tmp/spdk.sock 00:05:11.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.271 15:28:12 -- common/autotest_common.sh@817 -- # '[' -z 60452 ']' 00:05:11.271 15:28:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.271 15:28:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.271 15:28:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.271 15:28:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.271 15:28:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.271 [2024-04-17 15:28:12.571872] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:11.271 [2024-04-17 15:28:12.572037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:05:11.271 [2024-04-17 15:28:12.706013] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:11.271 [2024-04-17 15:28:12.706063] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.529 [2024-04-17 15:28:12.878312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.529 [2024-04-17 15:28:12.878416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.529 [2024-04-17 15:28:12.878418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.462 15:28:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.462 15:28:13 -- common/autotest_common.sh@850 -- # return 0 00:05:12.462 15:28:13 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60470 00:05:12.462 15:28:13 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.462 15:28:13 -- event/cpu_locks.sh@153 -- # waitforlisten 60470 /var/tmp/spdk2.sock 00:05:12.462 15:28:13 -- common/autotest_common.sh@817 -- # '[' -z 60470 ']' 00:05:12.462 15:28:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.462 15:28:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.462 15:28:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.462 15:28:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.462 15:28:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.462 [2024-04-17 15:28:13.637303] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:12.463 [2024-04-17 15:28:13.637464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60470 ] 00:05:12.463 [2024-04-17 15:28:13.780117] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:12.463 [2024-04-17 15:28:13.780191] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.720 [2024-04-17 15:28:14.093785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.720 [2024-04-17 15:28:14.097020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.720 [2024-04-17 15:28:14.097021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.654 15:28:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.654 15:28:14 -- common/autotest_common.sh@850 -- # return 0 00:05:13.654 15:28:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.654 15:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.654 15:28:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 15:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.654 15:28:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.654 15:28:14 -- common/autotest_common.sh@638 -- # local es=0 00:05:13.654 15:28:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.654 15:28:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:13.654 15:28:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.654 15:28:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:13.654 15:28:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.654 15:28:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.654 15:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.654 15:28:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 [2024-04-17 15:28:14.759882] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60452 has claimed it. 00:05:13.654 request: 00:05:13.654 { 00:05:13.654 "method": "framework_enable_cpumask_locks", 00:05:13.654 "req_id": 1 00:05:13.654 } 00:05:13.654 Got JSON-RPC error response 00:05:13.654 response: 00:05:13.654 { 00:05:13.654 "code": -32603, 00:05:13.654 "message": "Failed to claim CPU core: 2" 00:05:13.654 } 00:05:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.654 15:28:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:13.654 15:28:14 -- common/autotest_common.sh@641 -- # es=1 00:05:13.654 15:28:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.654 15:28:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:13.654 15:28:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.654 15:28:14 -- event/cpu_locks.sh@158 -- # waitforlisten 60452 /var/tmp/spdk.sock 00:05:13.654 15:28:14 -- common/autotest_common.sh@817 -- # '[' -z 60452 ']' 00:05:13.654 15:28:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.654 15:28:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.654 15:28:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.654 15:28:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.654 15:28:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 15:28:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.654 15:28:15 -- common/autotest_common.sh@850 -- # return 0 00:05:13.654 15:28:15 -- event/cpu_locks.sh@159 -- # waitforlisten 60470 /var/tmp/spdk2.sock 00:05:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.654 15:28:15 -- common/autotest_common.sh@817 -- # '[' -z 60470 ']' 00:05:13.654 15:28:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.654 15:28:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.654 15:28:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.654 15:28:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.654 15:28:15 -- common/autotest_common.sh@10 -- # set +x 00:05:13.913 ************************************ 00:05:13.913 END TEST locking_overlapped_coremask_via_rpc 00:05:13.913 ************************************ 00:05:13.913 15:28:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.913 15:28:15 -- common/autotest_common.sh@850 -- # return 0 00:05:13.913 15:28:15 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.913 15:28:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.913 15:28:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.913 15:28:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.913 00:05:13.913 real 0m2.735s 00:05:13.913 user 0m1.374s 00:05:13.913 sys 0m0.214s 00:05:13.913 15:28:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.913 15:28:15 -- common/autotest_common.sh@10 -- # set +x 00:05:13.913 15:28:15 -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.913 15:28:15 -- event/cpu_locks.sh@15 -- # [[ -z 60452 ]] 00:05:13.913 15:28:15 -- event/cpu_locks.sh@15 -- # killprocess 60452 00:05:13.913 15:28:15 -- common/autotest_common.sh@936 -- # '[' -z 60452 ']' 00:05:13.913 15:28:15 -- common/autotest_common.sh@940 -- # kill -0 60452 00:05:13.913 15:28:15 -- common/autotest_common.sh@941 -- # uname 00:05:13.914 15:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.914 15:28:15 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 60452 00:05:13.914 killing process with pid 60452 00:05:13.914 15:28:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.914 15:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.914 15:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60452' 00:05:13.914 15:28:15 -- common/autotest_common.sh@955 -- # kill 60452 00:05:13.914 15:28:15 -- common/autotest_common.sh@960 -- # wait 60452 00:05:14.481 15:28:15 -- event/cpu_locks.sh@16 -- # [[ -z 60470 ]] 00:05:14.481 15:28:15 -- event/cpu_locks.sh@16 -- # killprocess 60470 00:05:14.481 15:28:15 -- common/autotest_common.sh@936 -- # '[' -z 60470 ']' 00:05:14.481 15:28:15 -- common/autotest_common.sh@940 -- # kill -0 60470 00:05:14.481 15:28:15 -- common/autotest_common.sh@941 -- # uname 00:05:14.481 15:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.740 15:28:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60470 00:05:14.740 killing process with pid 60470 00:05:14.740 15:28:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:14.740 15:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:14.740 15:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60470' 00:05:14.740 15:28:15 -- common/autotest_common.sh@955 -- # kill 60470 00:05:14.740 15:28:15 -- common/autotest_common.sh@960 -- # wait 60470 00:05:15.308 15:28:16 -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.308 15:28:16 -- event/cpu_locks.sh@1 -- # cleanup 00:05:15.308 15:28:16 -- event/cpu_locks.sh@15 -- # [[ -z 60452 ]] 00:05:15.308 15:28:16 -- event/cpu_locks.sh@15 -- # killprocess 60452 00:05:15.308 Process with pid 60452 is not found 00:05:15.308 Process with pid 60470 is not found 00:05:15.308 15:28:16 -- common/autotest_common.sh@936 -- # '[' -z 60452 ']' 00:05:15.308 15:28:16 -- common/autotest_common.sh@940 -- # kill -0 60452 00:05:15.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60452) - No such process 00:05:15.308 15:28:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60452 is not found' 00:05:15.308 15:28:16 -- event/cpu_locks.sh@16 -- # [[ -z 60470 ]] 00:05:15.308 15:28:16 -- event/cpu_locks.sh@16 -- # killprocess 60470 00:05:15.308 15:28:16 -- common/autotest_common.sh@936 -- # '[' -z 60470 ']' 00:05:15.308 15:28:16 -- common/autotest_common.sh@940 -- # kill -0 60470 00:05:15.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60470) - No such process 00:05:15.308 15:28:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60470 is not found' 00:05:15.308 15:28:16 -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.308 ************************************ 00:05:15.308 END TEST cpu_locks 00:05:15.308 ************************************ 00:05:15.308 00:05:15.308 real 0m24.689s 00:05:15.308 user 0m40.072s 00:05:15.308 sys 0m6.849s 00:05:15.308 15:28:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.308 15:28:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.308 ************************************ 00:05:15.308 END TEST event 00:05:15.308 ************************************ 00:05:15.308 00:05:15.308 real 0m54.019s 00:05:15.308 user 1m38.631s 00:05:15.308 sys 0m11.151s 00:05:15.308 15:28:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.308 15:28:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.308 15:28:16 -- spdk/autotest.sh@177 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.308 15:28:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.308 15:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.308 15:28:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.567 ************************************ 00:05:15.567 START TEST thread 00:05:15.567 ************************************ 00:05:15.568 15:28:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.568 * Looking for test storage... 00:05:15.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:15.568 15:28:16 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.568 15:28:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:15.568 15:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.568 15:28:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.568 ************************************ 00:05:15.568 START TEST thread_poller_perf 00:05:15.568 ************************************ 00:05:15.568 15:28:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.568 [2024-04-17 15:28:16.949559] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:15.568 [2024-04-17 15:28:16.949652] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:05:15.828 [2024-04-17 15:28:17.090700] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.828 [2024-04-17 15:28:17.265706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.828 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:17.205 ====================================== 00:05:17.205 busy:2209515783 (cyc) 00:05:17.205 total_run_count: 287000 00:05:17.205 tsc_hz: 2200000000 (cyc) 00:05:17.205 ====================================== 00:05:17.205 poller_cost: 7698 (cyc), 3499 (nsec) 00:05:17.205 00:05:17.205 real 0m1.516s 00:05:17.205 user 0m1.330s 00:05:17.205 sys 0m0.075s 00:05:17.205 15:28:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.205 15:28:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.205 ************************************ 00:05:17.205 END TEST thread_poller_perf 00:05:17.205 ************************************ 00:05:17.205 15:28:18 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.205 15:28:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:17.205 15:28:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.205 15:28:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.205 ************************************ 00:05:17.205 START TEST thread_poller_perf 00:05:17.205 ************************************ 00:05:17.205 15:28:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.205 [2024-04-17 15:28:18.588571] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:17.205 [2024-04-17 15:28:18.588652] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:05:17.464 [2024-04-17 15:28:18.723189] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.464 [2024-04-17 15:28:18.869628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.464 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:18.843 ====================================== 00:05:18.843 busy:2202775640 (cyc) 00:05:18.843 total_run_count: 4261000 00:05:18.843 tsc_hz: 2200000000 (cyc) 00:05:18.843 ====================================== 00:05:18.843 poller_cost: 516 (cyc), 234 (nsec) 00:05:18.843 00:05:18.843 real 0m1.458s 00:05:18.843 user 0m1.277s 00:05:18.843 sys 0m0.071s 00:05:18.843 15:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.843 ************************************ 00:05:18.843 END TEST thread_poller_perf 00:05:18.843 15:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.843 ************************************ 00:05:18.843 15:28:20 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:18.843 ************************************ 00:05:18.843 END TEST thread 00:05:18.843 ************************************ 00:05:18.843 00:05:18.843 real 0m3.309s 00:05:18.843 user 0m2.709s 00:05:18.843 sys 0m0.344s 00:05:18.843 15:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.843 15:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.843 15:28:20 -- spdk/autotest.sh@178 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:18.843 15:28:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.843 15:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.843 15:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.843 ************************************ 00:05:18.843 START TEST accel 00:05:18.843 ************************************ 00:05:18.843 15:28:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:18.843 * Looking for test storage... 00:05:18.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:18.843 15:28:20 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:18.843 15:28:20 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:18.843 15:28:20 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.102 15:28:20 -- accel/accel.sh@62 -- # spdk_tgt_pid=60734 00:05:19.102 15:28:20 -- accel/accel.sh@63 -- # waitforlisten 60734 00:05:19.102 15:28:20 -- common/autotest_common.sh@817 -- # '[' -z 60734 ']' 00:05:19.102 15:28:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.102 15:28:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.102 15:28:20 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:19.102 15:28:20 -- accel/accel.sh@61 -- # build_accel_config 00:05:19.102 15:28:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:19.102 15:28:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.102 15:28:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.102 15:28:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.102 15:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.102 15:28:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.102 15:28:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.102 15:28:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.102 15:28:20 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.102 15:28:20 -- accel/accel.sh@41 -- # jq -r . 00:05:19.102 [2024-04-17 15:28:20.341467] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:19.102 [2024-04-17 15:28:20.341797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60734 ] 00:05:19.102 [2024-04-17 15:28:20.475300] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.361 [2024-04-17 15:28:20.598148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.937 15:28:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.937 15:28:21 -- common/autotest_common.sh@850 -- # return 0 00:05:19.937 15:28:21 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:19.937 15:28:21 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:19.937 15:28:21 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:19.937 15:28:21 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:19.937 15:28:21 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:19.937 15:28:21 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:19.937 15:28:21 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:19.937 15:28:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.937 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.937 15:28:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # 
IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # IFS== 00:05:20.196 15:28:21 -- accel/accel.sh@72 -- # read -r opc module 00:05:20.196 15:28:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.196 15:28:21 -- accel/accel.sh@75 -- # killprocess 60734 00:05:20.196 15:28:21 -- common/autotest_common.sh@936 -- # '[' -z 60734 ']' 00:05:20.196 15:28:21 -- common/autotest_common.sh@940 -- # kill -0 60734 00:05:20.196 15:28:21 -- common/autotest_common.sh@941 -- # uname 00:05:20.196 15:28:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.196 15:28:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60734 00:05:20.196 15:28:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.196 15:28:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.196 15:28:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60734' 00:05:20.196 killing process with pid 60734 00:05:20.196 15:28:21 -- common/autotest_common.sh@955 -- # kill 60734 00:05:20.196 15:28:21 -- common/autotest_common.sh@960 -- # wait 60734 00:05:20.763 15:28:22 -- accel/accel.sh@76 -- # trap - ERR 00:05:20.763 15:28:22 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:20.763 15:28:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:20.763 15:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.763 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 15:28:22 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:20.763 15:28:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:20.763 15:28:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.763 15:28:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.763 15:28:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.763 15:28:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.763 15:28:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.763 15:28:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.763 15:28:22 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.763 15:28:22 -- accel/accel.sh@41 -- # jq -r . 
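(The loop traced just above builds the expected_opcs map: the harness asks the running spdk_tgt over RPC which module is assigned to each opcode and splits every "opcode=module" pair on '='. A minimal standalone sketch of that pattern, assuming rpc.py from the SPDK scripts directory and a target listening on the default RPC socket, would be:)

    declare -A expected_opcs
    # Ask the target for its opcode-to-module assignments as "key=value" lines.
    exp_opcs=($(./scripts/rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS="=" read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=$module   # in this run every opcode resolves to the software module
    done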
00:05:20.763 15:28:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.763 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 15:28:22 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:21.022 15:28:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:21.022 15:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.022 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 ************************************ 00:05:21.022 START TEST accel_missing_filename 00:05:21.022 ************************************ 00:05:21.022 15:28:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:21.022 15:28:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:21.022 15:28:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:21.022 15:28:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:21.022 15:28:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.022 15:28:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:21.022 15:28:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.022 15:28:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:21.022 15:28:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:21.022 15:28:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.022 15:28:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.022 15:28:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.022 15:28:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.022 15:28:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.022 15:28:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.022 15:28:22 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.022 15:28:22 -- accel/accel.sh@41 -- # jq -r . 00:05:21.022 [2024-04-17 15:28:22.313312] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:21.022 [2024-04-17 15:28:22.313409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60799 ] 00:05:21.022 [2024-04-17 15:28:22.454909] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.281 [2024-04-17 15:28:22.615223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.281 [2024-04-17 15:28:22.692728] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.540 [2024-04-17 15:28:22.805124] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:21.540 A filename is required. 
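(That "A filename is required." abort is the expected outcome: accel_missing_filename deliberately runs the compress workload without the -l input file, and the NOT wrapper treats accel_perf's non-zero exit as a pass. A hand-run equivalent, using the same binary path as this run and omitting the JSON config the harness feeds in on /dev/fd/62, is roughly:)

    # Expected to fail: the compress workload needs an uncompressed input file via -l
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    echo "exit code: $?"   # the test only asserts that this is non-zero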
00:05:21.540 15:28:22 -- common/autotest_common.sh@641 -- # es=234 00:05:21.540 15:28:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:21.540 15:28:22 -- common/autotest_common.sh@650 -- # es=106 00:05:21.540 15:28:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:21.540 15:28:22 -- common/autotest_common.sh@658 -- # es=1 00:05:21.540 15:28:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:21.540 00:05:21.540 real 0m0.670s 00:05:21.540 user 0m0.466s 00:05:21.540 sys 0m0.147s 00:05:21.540 15:28:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.540 ************************************ 00:05:21.540 END TEST accel_missing_filename 00:05:21.540 ************************************ 00:05:21.540 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.799 15:28:22 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.799 15:28:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:21.799 15:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.799 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.799 ************************************ 00:05:21.799 START TEST accel_compress_verify 00:05:21.799 ************************************ 00:05:21.799 15:28:23 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.799 15:28:23 -- common/autotest_common.sh@638 -- # local es=0 00:05:21.799 15:28:23 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.799 15:28:23 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:21.799 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.799 15:28:23 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:21.799 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.799 15:28:23 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.799 15:28:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.799 15:28:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.799 15:28:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.799 15:28:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.799 15:28:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.799 15:28:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.799 15:28:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.799 15:28:23 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.799 15:28:23 -- accel/accel.sh@41 -- # jq -r . 00:05:21.799 [2024-04-17 15:28:23.105840] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:21.799 [2024-04-17 15:28:23.105943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:05:22.058 [2024-04-17 15:28:23.243441] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.058 [2024-04-17 15:28:23.363719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.058 [2024-04-17 15:28:23.441186] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.318 [2024-04-17 15:28:23.550628] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:22.318 00:05:22.318 Compression does not support the verify option, aborting. 00:05:22.318 15:28:23 -- common/autotest_common.sh@641 -- # es=161 00:05:22.318 15:28:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:22.318 15:28:23 -- common/autotest_common.sh@650 -- # es=33 00:05:22.318 15:28:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:22.318 15:28:23 -- common/autotest_common.sh@658 -- # es=1 00:05:22.318 15:28:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:22.318 00:05:22.318 real 0m0.626s 00:05:22.318 user 0m0.421s 00:05:22.318 sys 0m0.151s 00:05:22.318 15:28:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.318 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.318 ************************************ 00:05:22.318 END TEST accel_compress_verify 00:05:22.318 ************************************ 00:05:22.318 15:28:23 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:22.318 15:28:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:22.318 15:28:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.318 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.578 ************************************ 00:05:22.578 START TEST accel_wrong_workload 00:05:22.578 ************************************ 00:05:22.578 15:28:23 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:22.578 15:28:23 -- common/autotest_common.sh@638 -- # local es=0 00:05:22.578 15:28:23 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:22.578 15:28:23 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.578 15:28:23 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:22.578 15:28:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:22.578 15:28:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.578 15:28:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.578 15:28:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.578 15:28:23 -- accel/accel.sh@40 -- # local IFS=, 00:05:22.578 15:28:23 -- accel/accel.sh@41 -- # jq -r . 
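(accel_wrong_workload, configured just above, follows the same negative-test pattern with an unsupported workload name; the usage dump that follows is accel_perf rejecting it during argument parsing. A rough hand-run equivalent:)

    # Expected to fail: "foobar" is not a valid -w workload type
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar \
        || echo "rejected as expected"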
00:05:22.578 Unsupported workload type: foobar 00:05:22.578 [2024-04-17 15:28:23.853899] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:22.578 accel_perf options: 00:05:22.578 [-h help message] 00:05:22.578 [-q queue depth per core] 00:05:22.578 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.578 [-T number of threads per core 00:05:22.578 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.578 [-t time in seconds] 00:05:22.578 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.578 [ dif_verify, , dif_generate, dif_generate_copy 00:05:22.578 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.578 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.578 [-S for crc32c workload, use this seed value (default 0) 00:05:22.578 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.578 [-f for fill workload, use this BYTE value (default 255) 00:05:22.578 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.578 [-y verify result if this switch is on] 00:05:22.578 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.578 Can be used to spread operations across a wider range of memory. 00:05:22.578 15:28:23 -- common/autotest_common.sh@641 -- # es=1 00:05:22.578 15:28:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:22.578 15:28:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:22.578 15:28:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:22.578 00:05:22.578 real 0m0.033s 00:05:22.578 user 0m0.021s 00:05:22.578 sys 0m0.012s 00:05:22.578 ************************************ 00:05:22.578 END TEST accel_wrong_workload 00:05:22.578 ************************************ 00:05:22.578 15:28:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.578 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.578 15:28:23 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.578 15:28:23 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:22.578 15:28:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.578 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.578 ************************************ 00:05:22.578 START TEST accel_negative_buffers 00:05:22.578 ************************************ 00:05:22.578 15:28:23 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.578 15:28:23 -- common/autotest_common.sh@638 -- # local es=0 00:05:22.578 15:28:23 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:22.578 15:28:23 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:22.578 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.578 15:28:23 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:22.578 15:28:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:22.578 15:28:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:22.578 15:28:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.578 15:28:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.578 15:28:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.578 15:28:23 -- accel/accel.sh@40 -- # local IFS=, 00:05:22.578 15:28:23 -- accel/accel.sh@41 -- # jq -r . 00:05:22.578 -x option must be non-negative. 00:05:22.578 [2024-04-17 15:28:24.019032] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:22.852 accel_perf options: 00:05:22.852 [-h help message] 00:05:22.852 [-q queue depth per core] 00:05:22.852 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.852 [-T number of threads per core 00:05:22.852 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.852 [-t time in seconds] 00:05:22.852 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.852 [ dif_verify, , dif_generate, dif_generate_copy 00:05:22.852 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.852 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.852 [-S for crc32c workload, use this seed value (default 0) 00:05:22.852 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.852 [-f for fill workload, use this BYTE value (default 255) 00:05:22.852 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.852 [-y verify result if this switch is on] 00:05:22.852 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.852 Can be used to spread operations across a wider range of memory. 
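(The option summary printed above also covers the flags the positive tests below rely on: -w selects the workload, -t the duration in seconds, -S the crc32c seed, -y enables result verification. A hand-run equivalent of the crc32c case that starts next would be roughly:)

    # One-second software crc32c run, seed 32, with result verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y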
00:05:22.852 15:28:24 -- common/autotest_common.sh@641 -- # es=1 00:05:22.852 15:28:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:22.852 15:28:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:22.852 15:28:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:22.852 00:05:22.852 real 0m0.034s 00:05:22.852 user 0m0.021s 00:05:22.852 sys 0m0.013s 00:05:22.852 15:28:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.852 ************************************ 00:05:22.852 END TEST accel_negative_buffers 00:05:22.852 ************************************ 00:05:22.852 15:28:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.852 15:28:24 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:22.853 15:28:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:22.853 15:28:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.853 15:28:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.853 ************************************ 00:05:22.853 START TEST accel_crc32c 00:05:22.853 ************************************ 00:05:22.853 15:28:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:22.853 15:28:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.853 15:28:24 -- accel/accel.sh@17 -- # local accel_module 00:05:22.853 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.853 15:28:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:22.853 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.853 15:28:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:22.853 15:28:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.853 15:28:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.853 15:28:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.853 15:28:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.853 15:28:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.853 15:28:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.853 15:28:24 -- accel/accel.sh@40 -- # local IFS=, 00:05:22.853 15:28:24 -- accel/accel.sh@41 -- # jq -r . 00:05:22.853 [2024-04-17 15:28:24.175635] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:22.853 [2024-04-17 15:28:24.175725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:05:23.135 [2024-04-17 15:28:24.315103] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.135 [2024-04-17 15:28:24.464972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=0x1 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=crc32c 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=32 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=software 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@22 -- # accel_module=software 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=32 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=32 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=1 00:05:23.135 15:28:24 
-- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val=Yes 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.135 15:28:24 -- accel/accel.sh@20 -- # val= 00:05:23.135 15:28:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # IFS=: 00:05:23.135 15:28:24 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@20 -- # val= 00:05:24.511 15:28:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.511 15:28:25 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:24.511 15:28:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.511 00:05:24.511 real 0m1.654s 00:05:24.511 user 0m1.409s 00:05:24.511 sys 0m0.144s 00:05:24.511 15:28:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.511 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.511 ************************************ 00:05:24.511 END TEST accel_crc32c 00:05:24.511 ************************************ 00:05:24.511 15:28:25 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:24.511 15:28:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:24.511 15:28:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.511 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.511 ************************************ 00:05:24.511 START TEST accel_crc32c_C2 00:05:24.511 
************************************ 00:05:24.511 15:28:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:24.511 15:28:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:24.511 15:28:25 -- accel/accel.sh@17 -- # local accel_module 00:05:24.511 15:28:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # IFS=: 00:05:24.511 15:28:25 -- accel/accel.sh@19 -- # read -r var val 00:05:24.511 15:28:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:24.511 15:28:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.511 15:28:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.511 15:28:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.511 15:28:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.511 15:28:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.511 15:28:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.511 15:28:25 -- accel/accel.sh@40 -- # local IFS=, 00:05:24.511 15:28:25 -- accel/accel.sh@41 -- # jq -r . 00:05:24.511 [2024-04-17 15:28:25.952479] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:24.511 [2024-04-17 15:28:25.952560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:05:24.770 [2024-04-17 15:28:26.091579] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.770 [2024-04-17 15:28:26.195784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=0x1 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=crc32c 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=0 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" 
in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=software 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@22 -- # accel_module=software 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=32 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=32 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=1 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val=Yes 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.029 15:28:26 -- accel/accel.sh@20 -- # val= 00:05:25.029 15:28:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # IFS=: 00:05:25.029 15:28:26 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@20 -- # val= 
00:05:26.406 15:28:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.406 ************************************ 00:05:26.406 END TEST accel_crc32c_C2 00:05:26.406 ************************************ 00:05:26.406 15:28:27 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:26.406 15:28:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.406 00:05:26.406 real 0m1.622s 00:05:26.406 user 0m1.366s 00:05:26.406 sys 0m0.153s 00:05:26.406 15:28:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.406 15:28:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.406 15:28:27 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:26.406 15:28:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:26.406 15:28:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.406 15:28:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.406 ************************************ 00:05:26.406 START TEST accel_copy 00:05:26.406 ************************************ 00:05:26.406 15:28:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:26.406 15:28:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.406 15:28:27 -- accel/accel.sh@17 -- # local accel_module 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # IFS=: 00:05:26.406 15:28:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:26.406 15:28:27 -- accel/accel.sh@19 -- # read -r var val 00:05:26.406 15:28:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:26.406 15:28:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.406 15:28:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.406 15:28:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.406 15:28:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.406 15:28:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.406 15:28:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.406 15:28:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.406 15:28:27 -- accel/accel.sh@41 -- # jq -r . 00:05:26.406 [2024-04-17 15:28:27.694508] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:26.406 [2024-04-17 15:28:27.694628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:05:26.406 [2024-04-17 15:28:27.828837] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.665 [2024-04-17 15:28:27.983833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=0x1 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=copy 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=software 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=32 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=32 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=1 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.665 
15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val=Yes 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.665 15:28:28 -- accel/accel.sh@20 -- # val= 00:05:26.665 15:28:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # IFS=: 00:05:26.665 15:28:28 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.040 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 ************************************ 00:05:28.040 END TEST accel_copy 00:05:28.040 ************************************ 00:05:28.040 15:28:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.040 15:28:29 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:28.040 15:28:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.040 00:05:28.040 real 0m1.659s 00:05:28.040 user 0m1.419s 00:05:28.040 sys 0m0.140s 00:05:28.040 15:28:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.040 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.040 15:28:29 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.040 15:28:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:28.040 15:28:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.040 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.040 ************************************ 00:05:28.040 START TEST accel_fill 00:05:28.040 ************************************ 00:05:28.040 15:28:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.040 15:28:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.040 15:28:29 -- accel/accel.sh@17 -- # local 
accel_module 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.040 15:28:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.040 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.040 15:28:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.040 15:28:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.040 15:28:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.040 15:28:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.040 15:28:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.040 15:28:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.040 15:28:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.040 15:28:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:28.040 15:28:29 -- accel/accel.sh@41 -- # jq -r . 00:05:28.040 [2024-04-17 15:28:29.475532] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:28.040 [2024-04-17 15:28:29.475612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61030 ] 00:05:28.299 [2024-04-17 15:28:29.615201] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.557 [2024-04-17 15:28:29.756936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=0x1 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=fill 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=0x80 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=software 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=64 00:05:28.557 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.557 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.557 15:28:29 -- accel/accel.sh@20 -- # val=64 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.558 15:28:29 -- accel/accel.sh@20 -- # val=1 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.558 15:28:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.558 15:28:29 -- accel/accel.sh@20 -- # val=Yes 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.558 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.558 15:28:29 -- accel/accel.sh@20 -- # val= 00:05:28.558 15:28:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # IFS=: 00:05:28.558 15:28:29 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:29.936 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:05:29.936 ************************************ 00:05:29.936 END TEST accel_fill 00:05:29.936 ************************************ 00:05:29.936 15:28:31 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:29.936 15:28:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.936 00:05:29.936 real 0m1.659s 00:05:29.936 user 0m1.415s 00:05:29.936 sys 0m0.148s 00:05:29.936 15:28:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.936 15:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.936 15:28:31 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:29.936 15:28:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:29.936 15:28:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.936 15:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.936 ************************************ 00:05:29.936 START TEST accel_copy_crc32c 00:05:29.936 ************************************ 00:05:29.936 15:28:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:29.936 15:28:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.936 15:28:31 -- accel/accel.sh@17 -- # local accel_module 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:29.936 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:29.936 15:28:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:29.936 15:28:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:29.936 15:28:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.936 15:28:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.936 15:28:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.936 15:28:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.936 15:28:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.936 15:28:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.936 15:28:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:29.936 15:28:31 -- accel/accel.sh@41 -- # jq -r . 00:05:29.936 [2024-04-17 15:28:31.243608] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:29.936 [2024-04-17 15:28:31.243695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:05:30.195 [2024-04-17 15:28:31.382890] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.195 [2024-04-17 15:28:31.523112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=0x1 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=0 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=software 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=32 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=32 
00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=1 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val=Yes 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:30.195 15:28:31 -- accel/accel.sh@20 -- # val= 00:05:30.195 15:28:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # IFS=: 00:05:30.195 15:28:31 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@20 -- # val= 00:05:31.573 15:28:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.573 15:28:32 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:31.573 15:28:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.573 00:05:31.573 real 0m1.648s 00:05:31.573 user 0m1.411s 00:05:31.573 sys 0m0.140s 00:05:31.573 ************************************ 00:05:31.573 END TEST accel_copy_crc32c 00:05:31.573 ************************************ 00:05:31.573 15:28:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.573 15:28:32 -- common/autotest_common.sh@10 -- # set +x 00:05:31.573 15:28:32 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.573 15:28:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:05:31.573 15:28:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.573 15:28:32 -- common/autotest_common.sh@10 -- # set +x 00:05:31.573 ************************************ 00:05:31.573 START TEST accel_copy_crc32c_C2 00:05:31.573 ************************************ 00:05:31.573 15:28:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.573 15:28:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.573 15:28:32 -- accel/accel.sh@17 -- # local accel_module 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # IFS=: 00:05:31.573 15:28:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:31.573 15:28:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.573 15:28:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:31.573 15:28:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.573 15:28:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.573 15:28:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.573 15:28:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.573 15:28:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.573 15:28:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.573 15:28:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.573 15:28:32 -- accel/accel.sh@41 -- # jq -r . 00:05:31.832 [2024-04-17 15:28:33.016180] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:31.832 [2024-04-17 15:28:33.016788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:05:31.832 [2024-04-17 15:28:33.164366] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.091 [2024-04-17 15:28:33.306552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=0x1 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=0 00:05:32.091 15:28:33 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=software 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=32 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=32 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=1 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val=Yes 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:32.091 15:28:33 -- accel/accel.sh@20 -- # val= 00:05:32.091 15:28:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # IFS=: 00:05:32.091 15:28:33 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 
00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@20 -- # val= 00:05:33.465 15:28:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.465 15:28:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.465 15:28:34 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:33.465 15:28:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.465 00:05:33.465 real 0m1.671s 00:05:33.465 user 0m1.419s 00:05:33.465 sys 0m0.150s 00:05:33.465 15:28:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.465 15:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 ************************************ 00:05:33.465 END TEST accel_copy_crc32c_C2 00:05:33.465 ************************************ 00:05:33.465 15:28:34 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:33.465 15:28:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:33.465 15:28:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.465 15:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 ************************************ 00:05:33.465 START TEST accel_dualcast 00:05:33.465 ************************************ 00:05:33.465 15:28:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:33.465 15:28:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.465 15:28:34 -- accel/accel.sh@17 -- # local accel_module 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # IFS=: 00:05:33.465 15:28:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.466 15:28:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:33.466 15:28:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:33.466 15:28:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.466 15:28:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.466 15:28:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.466 15:28:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.466 15:28:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.466 15:28:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.466 15:28:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.466 15:28:34 -- accel/accel.sh@41 -- # jq -r . 00:05:33.466 [2024-04-17 15:28:34.794121] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:33.466 [2024-04-17 15:28:34.794246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61154 ] 00:05:33.724 [2024-04-17 15:28:34.932054] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.724 [2024-04-17 15:28:35.060678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=0x1 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=dualcast 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=software 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=32 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=32 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val=1 00:05:33.724 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.724 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.724 15:28:35 -- accel/accel.sh@20 -- # val='1 seconds' 
00:05:33.725 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.725 15:28:35 -- accel/accel.sh@20 -- # val=Yes 00:05:33.725 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.725 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.725 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.725 15:28:35 -- accel/accel.sh@20 -- # val= 00:05:33.725 15:28:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.725 15:28:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.100 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.100 15:28:36 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:35.100 15:28:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.100 00:05:35.100 real 0m1.645s 00:05:35.100 user 0m1.403s 00:05:35.100 sys 0m0.144s 00:05:35.100 15:28:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.100 15:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.100 ************************************ 00:05:35.100 END TEST accel_dualcast 00:05:35.100 ************************************ 00:05:35.100 15:28:36 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:35.100 15:28:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:35.100 15:28:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.100 15:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.100 ************************************ 00:05:35.100 START TEST accel_compare 00:05:35.100 ************************************ 00:05:35.100 15:28:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:35.100 15:28:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.100 15:28:36 -- accel/accel.sh@17 -- # local 
accel_module 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.100 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.100 15:28:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:35.100 15:28:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:35.100 15:28:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.100 15:28:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.100 15:28:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.100 15:28:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.100 15:28:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.100 15:28:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.100 15:28:36 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.100 15:28:36 -- accel/accel.sh@41 -- # jq -r . 00:05:35.358 [2024-04-17 15:28:36.550739] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:35.358 [2024-04-17 15:28:36.550831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:05:35.358 [2024-04-17 15:28:36.684041] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.617 [2024-04-17 15:28:36.819136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.617 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.617 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.617 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.617 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.617 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.617 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.617 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.617 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.617 15:28:36 -- accel/accel.sh@20 -- # val=0x1 00:05:35.617 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.617 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=compare 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=software 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 
00:05:35.618 15:28:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=32 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=32 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=1 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val=Yes 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:35.618 15:28:36 -- accel/accel.sh@20 -- # val= 00:05:35.618 15:28:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # IFS=: 00:05:35.618 15:28:36 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:36.993 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.993 15:28:38 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:36.993 15:28:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.993 00:05:36.993 real 0m1.645s 00:05:36.993 user 0m1.403s 00:05:36.993 sys 
0m0.143s 00:05:36.993 15:28:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.993 ************************************ 00:05:36.993 END TEST accel_compare 00:05:36.993 ************************************ 00:05:36.993 15:28:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.993 15:28:38 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:36.993 15:28:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:36.993 15:28:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.993 15:28:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.993 ************************************ 00:05:36.993 START TEST accel_xor 00:05:36.993 ************************************ 00:05:36.993 15:28:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:36.993 15:28:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.993 15:28:38 -- accel/accel.sh@17 -- # local accel_module 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:36.993 15:28:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:36.993 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:36.993 15:28:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:36.993 15:28:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.993 15:28:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.993 15:28:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.993 15:28:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.993 15:28:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.993 15:28:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.993 15:28:38 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.993 15:28:38 -- accel/accel.sh@41 -- # jq -r . 00:05:36.993 [2024-04-17 15:28:38.317733] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:36.993 [2024-04-17 15:28:38.317836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61237 ] 00:05:37.251 [2024-04-17 15:28:38.452002] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.251 [2024-04-17 15:28:38.593136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.251 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.251 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.251 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.251 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.251 15:28:38 -- accel/accel.sh@20 -- # val=0x1 00:05:37.251 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.251 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.251 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.251 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=xor 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=2 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=software 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=32 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=32 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=1 00:05:37.252 15:28:38 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val=Yes 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:37.252 15:28:38 -- accel/accel.sh@20 -- # val= 00:05:37.252 15:28:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # IFS=: 00:05:37.252 15:28:38 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@20 -- # val= 00:05:38.627 15:28:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.627 15:28:39 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:38.627 15:28:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.627 00:05:38.627 real 0m1.643s 00:05:38.627 user 0m1.404s 00:05:38.627 sys 0m0.145s 00:05:38.627 15:28:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.627 ************************************ 00:05:38.627 END TEST accel_xor 00:05:38.627 ************************************ 00:05:38.627 15:28:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.627 15:28:39 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:38.627 15:28:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:38.627 15:28:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.627 15:28:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.627 ************************************ 00:05:38.627 START TEST accel_xor 00:05:38.627 ************************************ 00:05:38.627 
15:28:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:38.627 15:28:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.627 15:28:40 -- accel/accel.sh@17 -- # local accel_module 00:05:38.627 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.627 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.627 15:28:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:38.627 15:28:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:38.627 15:28:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.627 15:28:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.627 15:28:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.627 15:28:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.627 15:28:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.627 15:28:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.627 15:28:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.627 15:28:40 -- accel/accel.sh@41 -- # jq -r . 00:05:38.885 [2024-04-17 15:28:40.082790] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:38.885 [2024-04-17 15:28:40.082891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61281 ] 00:05:38.885 [2024-04-17 15:28:40.220154] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.142 [2024-04-17 15:28:40.356616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=0x1 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=xor 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=3 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 
00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=software 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=32 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=32 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=1 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val=Yes 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:39.142 15:28:40 -- accel/accel.sh@20 -- # val= 00:05:39.142 15:28:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # IFS=: 00:05:39.142 15:28:40 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@20 -- # val= 00:05:40.515 15:28:41 -- accel/accel.sh@21 -- # case "$var" in 
00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.515 15:28:41 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:40.515 15:28:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.515 00:05:40.515 real 0m1.644s 00:05:40.515 user 0m1.411s 00:05:40.515 sys 0m0.138s 00:05:40.515 15:28:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.515 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.515 ************************************ 00:05:40.515 END TEST accel_xor 00:05:40.515 ************************************ 00:05:40.515 15:28:41 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:40.515 15:28:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:40.515 15:28:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.515 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.515 ************************************ 00:05:40.515 START TEST accel_dif_verify 00:05:40.515 ************************************ 00:05:40.515 15:28:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:40.515 15:28:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.515 15:28:41 -- accel/accel.sh@17 -- # local accel_module 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # IFS=: 00:05:40.515 15:28:41 -- accel/accel.sh@19 -- # read -r var val 00:05:40.515 15:28:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:40.515 15:28:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:40.515 15:28:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.515 15:28:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.515 15:28:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.515 15:28:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.515 15:28:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.515 15:28:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.515 15:28:41 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.515 15:28:41 -- accel/accel.sh@41 -- # jq -r . 00:05:40.515 [2024-04-17 15:28:41.847650] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:40.515 [2024-04-17 15:28:41.847774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:05:40.774 [2024-04-17 15:28:41.980576] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.774 [2024-04-17 15:28:42.129235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val=0x1 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val=dif_verify 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:40.774 15:28:42 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:40.774 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.774 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val=software 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 
-- # val=32 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val=32 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val=1 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.031 15:28:42 -- accel/accel.sh@20 -- # val=No 00:05:41.031 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.031 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.032 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.032 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:41.032 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.032 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.032 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:41.032 15:28:42 -- accel/accel.sh@20 -- # val= 00:05:41.032 15:28:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.032 15:28:42 -- accel/accel.sh@19 -- # IFS=: 00:05:41.032 15:28:42 -- accel/accel.sh@19 -- # read -r var val 00:05:42.403 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.403 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.403 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.403 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.403 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.403 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.403 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.403 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.403 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.404 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.404 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.404 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.404 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.404 15:28:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.404 15:28:43 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:42.404 15:28:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.404 00:05:42.404 real 0m1.638s 00:05:42.404 user 0m1.390s 00:05:42.404 sys 0m0.155s 00:05:42.404 15:28:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.404 15:28:43 -- common/autotest_common.sh@10 -- # set +x 00:05:42.404 ************************************ 00:05:42.404 END TEST 
accel_dif_verify 00:05:42.404 ************************************ 00:05:42.404 15:28:43 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:42.404 15:28:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:42.404 15:28:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.404 15:28:43 -- common/autotest_common.sh@10 -- # set +x 00:05:42.404 ************************************ 00:05:42.404 START TEST accel_dif_generate 00:05:42.404 ************************************ 00:05:42.404 15:28:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:42.404 15:28:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.404 15:28:43 -- accel/accel.sh@17 -- # local accel_module 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.404 15:28:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.404 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.404 15:28:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.404 15:28:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.404 15:28:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.404 15:28:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.404 15:28:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.404 15:28:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.404 15:28:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.404 15:28:43 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.404 15:28:43 -- accel/accel.sh@41 -- # jq -r . 00:05:42.404 [2024-04-17 15:28:43.605794] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:42.404 [2024-04-17 15:28:43.605876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 00:05:42.404 [2024-04-17 15:28:43.744266] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.662 [2024-04-17 15:28:43.885008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=0x1 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=dif_generate 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=software 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=32 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=32 00:05:42.662 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.662 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.662 15:28:43 -- accel/accel.sh@20 -- # val=1 00:05:42.663 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.663 15:28:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.663 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.663 15:28:43 -- accel/accel.sh@20 -- # val=No 00:05:42.663 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.663 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.663 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:42.663 15:28:43 -- accel/accel.sh@20 -- # val= 00:05:42.663 15:28:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # IFS=: 00:05:42.663 15:28:43 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.111 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.111 15:28:45 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:44.111 15:28:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.111 00:05:44.111 real 0m1.677s 00:05:44.111 user 0m1.431s 00:05:44.111 sys 0m0.150s 00:05:44.111 15:28:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.111 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.111 ************************************ 00:05:44.111 END TEST accel_dif_generate 00:05:44.111 ************************************ 00:05:44.111 15:28:45 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:44.111 15:28:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:44.111 15:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.111 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.111 ************************************ 00:05:44.111 START TEST accel_dif_generate_copy 00:05:44.111 ************************************ 00:05:44.111 15:28:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:44.111 15:28:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.111 15:28:45 -- accel/accel.sh@17 -- # local accel_module 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.111 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.111 15:28:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:44.111 15:28:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:44.111 15:28:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.111 15:28:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.111 15:28:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.111 15:28:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.111 15:28:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.111 15:28:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.111 15:28:45 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.111 15:28:45 -- accel/accel.sh@41 -- # jq -r . 00:05:44.111 [2024-04-17 15:28:45.412107] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:44.111 [2024-04-17 15:28:45.412226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61402 ] 00:05:44.369 [2024-04-17 15:28:45.557206] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.369 [2024-04-17 15:28:45.731237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=0x1 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=software 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=32 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=32 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 
-- # val=1 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val=No 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:44.627 15:28:45 -- accel/accel.sh@20 -- # val= 00:05:44.627 15:28:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # IFS=: 00:05:44.627 15:28:45 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.001 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.001 15:28:47 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:46.001 15:28:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.001 00:05:46.001 real 0m1.695s 00:05:46.001 user 0m1.443s 00:05:46.001 sys 0m0.153s 00:05:46.001 15:28:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.001 15:28:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.001 ************************************ 00:05:46.001 END TEST accel_dif_generate_copy 00:05:46.001 ************************************ 00:05:46.001 15:28:47 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:46.001 15:28:47 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.001 15:28:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:46.001 15:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.001 15:28:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.001 ************************************ 00:05:46.001 START TEST accel_comp 00:05:46.001 ************************************ 00:05:46.001 15:28:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.001 15:28:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.001 15:28:47 -- accel/accel.sh@17 -- # local accel_module 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.001 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.001 15:28:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.001 15:28:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.001 15:28:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.001 15:28:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.001 15:28:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.001 15:28:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.001 15:28:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.001 15:28:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.001 15:28:47 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.001 15:28:47 -- accel/accel.sh@41 -- # jq -r . 00:05:46.001 [2024-04-17 15:28:47.229589] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:46.001 [2024-04-17 15:28:47.229662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61445 ] 00:05:46.001 [2024-04-17 15:28:47.364102] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.260 [2024-04-17 15:28:47.508680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=0x1 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=compress 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@23 
-- # accel_opc=compress 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=software 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=32 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=32 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=1 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val=No 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:46.260 15:28:47 -- accel/accel.sh@20 -- # val= 00:05:46.260 15:28:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # IFS=: 00:05:46.260 15:28:47 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # 
read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@20 -- # val= 00:05:47.634 15:28:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.634 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.634 15:28:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.634 15:28:48 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:47.634 15:28:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.634 00:05:47.634 real 0m1.655s 00:05:47.634 user 0m1.400s 00:05:47.634 sys 0m0.156s 00:05:47.634 15:28:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.634 15:28:48 -- common/autotest_common.sh@10 -- # set +x 00:05:47.634 ************************************ 00:05:47.634 END TEST accel_comp 00:05:47.634 ************************************ 00:05:47.634 15:28:48 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.634 15:28:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:47.634 15:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.634 15:28:48 -- common/autotest_common.sh@10 -- # set +x 00:05:47.634 ************************************ 00:05:47.634 START TEST accel_decomp 00:05:47.634 ************************************ 00:05:47.634 15:28:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.634 15:28:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.635 15:28:48 -- accel/accel.sh@17 -- # local accel_module 00:05:47.635 15:28:48 -- accel/accel.sh@19 -- # IFS=: 00:05:47.635 15:28:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.635 15:28:48 -- accel/accel.sh@19 -- # read -r var val 00:05:47.635 15:28:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.635 15:28:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.635 15:28:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.635 15:28:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.635 15:28:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.635 15:28:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.635 15:28:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.635 15:28:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:47.635 15:28:48 -- accel/accel.sh@41 -- # jq -r . 00:05:47.635 [2024-04-17 15:28:49.016491] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:47.635 [2024-04-17 15:28:49.016735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61481 ] 00:05:47.893 [2024-04-17 15:28:49.151830] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.893 [2024-04-17 15:28:49.280544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=0x1 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=decompress 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=software 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=32 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- 
accel/accel.sh@20 -- # val=32 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.151 15:28:49 -- accel/accel.sh@20 -- # val=1 00:05:48.151 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.151 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.152 15:28:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.152 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.152 15:28:49 -- accel/accel.sh@20 -- # val=Yes 00:05:48.152 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.152 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.152 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:48.152 15:28:49 -- accel/accel.sh@20 -- # val= 00:05:48.152 15:28:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # IFS=: 00:05:48.152 15:28:49 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@20 -- # val= 00:05:49.528 15:28:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.528 15:28:50 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.528 15:28:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.528 00:05:49.528 real 0m1.637s 00:05:49.528 user 0m1.394s 00:05:49.528 sys 0m0.144s 00:05:49.528 ************************************ 00:05:49.528 END TEST accel_decomp 00:05:49.528 ************************************ 00:05:49.528 15:28:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.528 15:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:49.528 15:28:50 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:05:49.528 15:28:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:49.528 15:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.528 15:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:49.528 ************************************ 00:05:49.528 START TEST accel_decmop_full 00:05:49.528 ************************************ 00:05:49.528 15:28:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:49.528 15:28:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.528 15:28:50 -- accel/accel.sh@17 -- # local accel_module 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 15:28:50 -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 15:28:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:49.528 15:28:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:49.528 15:28:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.528 15:28:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.528 15:28:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.528 15:28:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.528 15:28:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.528 15:28:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.528 15:28:50 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.528 15:28:50 -- accel/accel.sh@41 -- # jq -r . 00:05:49.528 [2024-04-17 15:28:50.782996] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:49.528 [2024-04-17 15:28:50.783107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61528 ] 00:05:49.528 [2024-04-17 15:28:50.922874] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.787 [2024-04-17 15:28:51.096229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=0x1 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 
15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=decompress 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=software 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=32 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=32 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=1 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val=Yes 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:49.787 15:28:51 -- accel/accel.sh@20 -- # val= 00:05:49.787 15:28:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # IFS=: 00:05:49.787 15:28:51 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r 
var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@20 -- # val= 00:05:51.162 15:28:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.162 15:28:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.162 15:28:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.162 00:05:51.162 real 0m1.709s 00:05:51.162 user 0m1.456s 00:05:51.162 sys 0m0.155s 00:05:51.162 15:28:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.162 15:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.162 ************************************ 00:05:51.162 END TEST accel_decmop_full 00:05:51.162 ************************************ 00:05:51.162 15:28:52 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:51.162 15:28:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:51.162 15:28:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.162 15:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.162 ************************************ 00:05:51.162 START TEST accel_decomp_mcore 00:05:51.162 ************************************ 00:05:51.162 15:28:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:51.162 15:28:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.162 15:28:52 -- accel/accel.sh@17 -- # local accel_module 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # IFS=: 00:05:51.162 15:28:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:51.162 15:28:52 -- accel/accel.sh@19 -- # read -r var val 00:05:51.162 15:28:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:51.162 15:28:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.162 15:28:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.162 15:28:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.162 15:28:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.163 15:28:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.163 15:28:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.163 15:28:52 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.163 15:28:52 -- accel/accel.sh@41 -- # jq -r . 00:05:51.422 [2024-04-17 15:28:52.642013] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:51.422 [2024-04-17 15:28:52.642234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61572 ] 00:05:51.422 [2024-04-17 15:28:52.787715] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.681 [2024-04-17 15:28:52.925223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.681 [2024-04-17 15:28:52.925302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.681 [2024-04-17 15:28:52.925428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.681 [2024-04-17 15:28:52.925433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=0xf 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=decompress 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=software 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 
00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=32 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=32 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=1 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val=Yes 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:51.681 15:28:53 -- accel/accel.sh@20 -- # val= 00:05:51.681 15:28:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # IFS=: 00:05:51.681 15:28:53 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- 
accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.081 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.081 15:28:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:53.081 15:28:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.081 00:05:53.081 real 0m1.702s 00:05:53.081 user 0m5.009s 00:05:53.081 sys 0m0.165s 00:05:53.081 15:28:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.081 15:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:53.081 ************************************ 00:05:53.081 END TEST accel_decomp_mcore 00:05:53.081 ************************************ 00:05:53.081 15:28:54 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:53.081 15:28:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:53.081 15:28:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.081 15:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:53.081 ************************************ 00:05:53.081 START TEST accel_decomp_full_mcore 00:05:53.081 ************************************ 00:05:53.081 15:28:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:53.081 15:28:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.081 15:28:54 -- accel/accel.sh@17 -- # local accel_module 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.081 15:28:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:53.081 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.081 15:28:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:53.081 15:28:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.081 15:28:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.081 15:28:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.081 15:28:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.081 15:28:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.081 15:28:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.081 15:28:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.081 15:28:54 -- accel/accel.sh@41 -- # jq -r . 00:05:53.081 [2024-04-17 15:28:54.448615] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:53.081 [2024-04-17 15:28:54.448744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:05:53.340 [2024-04-17 15:28:54.583603] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.340 [2024-04-17 15:28:54.732146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.340 [2024-04-17 15:28:54.732330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.340 [2024-04-17 15:28:54.732469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.340 [2024-04-17 15:28:54.732846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=0xf 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=decompress 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=software 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 
00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=32 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=32 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=1 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val=Yes 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:53.599 15:28:54 -- accel/accel.sh@20 -- # val= 00:05:53.599 15:28:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # IFS=: 00:05:53.599 15:28:54 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- 
accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:54.975 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.975 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.975 15:28:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.975 15:28:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.975 15:28:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.975 00:05:54.975 real 0m1.683s 00:05:54.975 user 0m5.016s 00:05:54.975 sys 0m0.164s 00:05:54.975 15:28:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.975 15:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:54.975 ************************************ 00:05:54.975 END TEST accel_decomp_full_mcore 00:05:54.975 ************************************ 00:05:54.975 15:28:56 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:54.975 15:28:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:54.975 15:28:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.975 15:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:54.975 ************************************ 00:05:54.975 START TEST accel_decomp_mthread 00:05:54.975 ************************************ 00:05:54.975 15:28:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:54.976 15:28:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.976 15:28:56 -- accel/accel.sh@17 -- # local accel_module 00:05:54.976 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:54.976 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:54.976 15:28:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:54.976 15:28:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:54.976 15:28:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.976 15:28:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.976 15:28:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.976 15:28:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.976 15:28:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.976 15:28:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.976 15:28:56 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.976 15:28:56 -- accel/accel.sh@41 -- # jq -r . 00:05:54.976 [2024-04-17 15:28:56.257188] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:54.976 [2024-04-17 15:28:56.257272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61655 ] 00:05:54.976 [2024-04-17 15:28:56.394884] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.234 [2024-04-17 15:28:56.548039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=0x1 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=decompress 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=software 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=32 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- 
accel/accel.sh@20 -- # val=32 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=2 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val=Yes 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 15:28:56 -- accel/accel.sh@20 -- # val= 00:05:55.234 15:28:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 15:28:56 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 ************************************ 00:05:56.613 END TEST accel_decomp_mthread 00:05:56.613 ************************************ 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@20 -- # val= 00:05:56.613 15:28:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:57 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.613 15:28:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.613 15:28:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.613 00:05:56.613 real 0m1.670s 00:05:56.613 user 0m1.427s 00:05:56.613 sys 0m0.146s 00:05:56.613 15:28:57 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:05:56.613 15:28:57 -- common/autotest_common.sh@10 -- # set +x 00:05:56.613 15:28:57 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.613 15:28:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:56.613 15:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.613 15:28:57 -- common/autotest_common.sh@10 -- # set +x 00:05:56.613 ************************************ 00:05:56.613 START TEST accel_deomp_full_mthread 00:05:56.613 ************************************ 00:05:56.613 15:28:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.613 15:28:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.613 15:28:58 -- accel/accel.sh@17 -- # local accel_module 00:05:56.613 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:56.613 15:28:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.613 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:56.613 15:28:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.613 15:28:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.613 15:28:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.613 15:28:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.613 15:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.613 15:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.613 15:28:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.613 15:28:58 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.613 15:28:58 -- accel/accel.sh@41 -- # jq -r . 00:05:56.873 [2024-04-17 15:28:58.064278] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:05:56.873 [2024-04-17 15:28:58.064439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61699 ] 00:05:56.873 [2024-04-17 15:28:58.204314] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.133 [2024-04-17 15:28:58.355676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=0x1 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=decompress 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=software 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=32 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- 
accel/accel.sh@20 -- # val=32 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=2 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val=Yes 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 15:28:58 -- accel/accel.sh@20 -- # val= 00:05:57.133 15:28:58 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 15:28:58 -- accel/accel.sh@19 -- # read -r var val 00:05:58.509 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.509 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.509 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.509 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.509 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.509 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.509 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.510 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.510 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.510 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.510 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.510 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.510 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.510 15:28:59 -- accel/accel.sh@20 -- # val= 00:05:58.510 15:28:59 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # IFS=: 00:05:58.510 15:28:59 -- accel/accel.sh@19 -- # read -r var val 00:05:58.510 15:28:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.510 15:28:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.510 15:28:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.510 00:05:58.510 real 0m1.713s 00:05:58.510 user 0m1.452s 00:05:58.510 sys 0m0.161s 00:05:58.510 15:28:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.510 ************************************ 00:05:58.510 END TEST accel_deomp_full_mthread 00:05:58.510 
************************************ 00:05:58.510 15:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.510 15:28:59 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:58.510 15:28:59 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.510 15:28:59 -- accel/accel.sh@137 -- # build_accel_config 00:05:58.510 15:28:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:58.510 15:28:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.510 15:28:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.510 15:28:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.510 15:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.510 15:28:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.510 15:28:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.510 15:28:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.510 15:28:59 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.510 15:28:59 -- accel/accel.sh@41 -- # jq -r . 00:05:58.510 ************************************ 00:05:58.510 START TEST accel_dif_functional_tests 00:05:58.510 ************************************ 00:05:58.510 15:28:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.510 [2024-04-17 15:28:59.937099] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:58.510 [2024-04-17 15:28:59.937563] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:05:58.768 [2024-04-17 15:29:00.077947] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.027 [2024-04-17 15:29:00.229987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.027 [2024-04-17 15:29:00.230201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.027 [2024-04-17 15:29:00.230209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.027 00:05:59.028 00:05:59.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.028 http://cunit.sourceforge.net/ 00:05:59.028 00:05:59.028 00:05:59.028 Suite: accel_dif 00:05:59.028 Test: verify: DIF generated, GUARD check ...passed 00:05:59.028 Test: verify: DIF generated, APPTAG check ...passed 00:05:59.028 Test: verify: DIF generated, REFTAG check ...passed 00:05:59.028 Test: verify: DIF not generated, GUARD check ...passed 00:05:59.028 Test: verify: DIF not generated, APPTAG check ...passed 00:05:59.028 Test: verify: DIF not generated, REFTAG check ...passed 00:05:59.028 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:59.028 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:59.028 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:59.028 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:59.028 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-04-17 15:29:00.358021] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:59.028 [2024-04-17 15:29:00.358119] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:59.028 [2024-04-17 15:29:00.358160] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:59.028 [2024-04-17 15:29:00.358190] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=10, Expected=14, Actual=5a5a 00:05:59.028 [2024-04-17 15:29:00.358218] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:59.028 [2024-04-17 15:29:00.358246] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:59.028 [2024-04-17 15:29:00.358308] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:59.028 passed 00:05:59.028 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:59.028 Test: generate copy: DIF generated, GUARD check ...passed 00:05:59.028 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:59.028 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:59.028 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-17 15:29:00.358480] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:59.028 passed 00:05:59.028 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:59.028 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:59.028 Test: generate copy: iovecs-len validate ...passed 00:05:59.028 Test: generate copy: buffer alignment validate ...passed 00:05:59.028 00:05:59.028 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.028 suites 1 1 n/a 0 0 00:05:59.028 tests 20 20 20 0 0 00:05:59.028 asserts 204 204 204 0 n/a 00:05:59.028 00:05:59.028 Elapsed time = 0.004 seconds 00:05:59.028 [2024-04-17 15:29:00.358775] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:59.287 00:05:59.287 real 0m0.821s 00:05:59.287 user 0m1.093s 00:05:59.287 sys 0m0.209s 00:05:59.287 ************************************ 00:05:59.287 END TEST accel_dif_functional_tests 00:05:59.287 ************************************ 00:05:59.287 15:29:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.287 15:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 ************************************ 00:05:59.546 END TEST accel 00:05:59.546 ************************************ 00:05:59.546 00:05:59.546 real 0m40.553s 00:05:59.546 user 0m40.673s 00:05:59.546 sys 0m5.681s 00:05:59.546 15:29:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.546 15:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 15:29:00 -- spdk/autotest.sh@179 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:59.546 15:29:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.546 15:29:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.546 15:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 ************************************ 00:05:59.546 START TEST accel_rpc 00:05:59.546 ************************************ 00:05:59.546 15:29:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:59.546 * Looking for test storage... 00:05:59.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:59.546 15:29:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.546 15:29:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61820 00:05:59.546 15:29:00 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:59.546 15:29:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 61820 00:05:59.546 15:29:00 -- common/autotest_common.sh@817 -- # '[' -z 61820 ']' 00:05:59.546 15:29:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.546 15:29:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.546 15:29:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.546 15:29:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.546 15:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 [2024-04-17 15:29:01.058149] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:05:59.805 [2024-04-17 15:29:01.058790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:05:59.805 [2024-04-17 15:29:01.208677] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.063 [2024-04-17 15:29:01.356416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.633 15:29:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.633 15:29:02 -- common/autotest_common.sh@850 -- # return 0 00:06:00.633 15:29:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:00.633 15:29:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:00.633 15:29:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:00.633 15:29:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:00.633 15:29:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:00.633 15:29:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.633 15:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.633 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.891 ************************************ 00:06:00.891 START TEST accel_assign_opcode 00:06:00.891 ************************************ 00:06:00.891 15:29:02 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:00.891 15:29:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:00.891 15:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.891 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.892 [2024-04-17 15:29:02.157552] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:00.892 15:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.892 15:29:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:00.892 15:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.892 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.892 [2024-04-17 15:29:02.169549] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:00.892 15:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.892 15:29:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:00.892 15:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.892 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.150 15:29:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.150 15:29:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:01.150 15:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.150 15:29:02 -- accel/accel_rpc.sh@42 -- # grep software 00:06:01.150 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.150 15:29:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:01.150 15:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.150 software 00:06:01.150 ************************************ 00:06:01.150 END TEST accel_assign_opcode 00:06:01.150 ************************************ 00:06:01.150 00:06:01.150 real 0m0.387s 00:06:01.150 user 0m0.058s 00:06:01.150 sys 0m0.011s 00:06:01.150 15:29:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.150 15:29:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.150 15:29:02 -- accel/accel_rpc.sh@55 -- # killprocess 61820 00:06:01.150 15:29:02 -- common/autotest_common.sh@936 -- # '[' -z 61820 ']' 00:06:01.150 15:29:02 -- common/autotest_common.sh@940 -- # kill -0 61820 00:06:01.150 15:29:02 -- common/autotest_common.sh@941 -- # uname 00:06:01.150 15:29:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.150 15:29:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61820 00:06:01.409 killing process with pid 61820 00:06:01.409 15:29:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.409 15:29:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.409 15:29:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61820' 00:06:01.409 15:29:02 -- common/autotest_common.sh@955 -- # kill 61820 00:06:01.409 15:29:02 -- common/autotest_common.sh@960 -- # wait 61820 00:06:01.976 00:06:01.976 real 0m2.328s 00:06:01.976 user 0m2.399s 00:06:01.976 sys 0m0.597s 00:06:01.976 ************************************ 00:06:01.976 END TEST accel_rpc 00:06:01.976 ************************************ 00:06:01.976 15:29:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.976 15:29:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.976 15:29:03 -- spdk/autotest.sh@180 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.976 15:29:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.976 15:29:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.976 15:29:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.976 ************************************ 00:06:01.976 START TEST app_cmdline 00:06:01.976 ************************************ 00:06:01.976 15:29:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.976 * Looking for test storage... 
00:06:02.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:02.235 15:29:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.235 15:29:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61922 00:06:02.236 15:29:03 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.236 15:29:03 -- app/cmdline.sh@18 -- # waitforlisten 61922 00:06:02.236 15:29:03 -- common/autotest_common.sh@817 -- # '[' -z 61922 ']' 00:06:02.236 15:29:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.236 15:29:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.236 15:29:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.236 15:29:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.236 15:29:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.236 [2024-04-17 15:29:03.523235] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:02.236 [2024-04-17 15:29:03.523827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:06:02.236 [2024-04-17 15:29:03.673637] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.494 [2024-04-17 15:29:03.823556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.428 15:29:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.428 15:29:04 -- common/autotest_common.sh@850 -- # return 0 00:06:03.428 15:29:04 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:03.428 { 00:06:03.428 "version": "SPDK v24.05-pre git sha1 480afb9a1", 00:06:03.428 "fields": { 00:06:03.428 "major": 24, 00:06:03.428 "minor": 5, 00:06:03.428 "patch": 0, 00:06:03.428 "suffix": "-pre", 00:06:03.428 "commit": "480afb9a1" 00:06:03.428 } 00:06:03.428 } 00:06:03.428 15:29:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.428 15:29:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.428 15:29:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.428 15:29:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.428 15:29:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.428 15:29:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.428 15:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.428 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.428 15:29:04 -- app/cmdline.sh@26 -- # sort 00:06:03.428 15:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.687 15:29:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.687 15:29:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.687 15:29:04 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.687 15:29:04 -- common/autotest_common.sh@638 -- # local es=0 00:06:03.687 15:29:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.687 15:29:04 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.687 15:29:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.687 15:29:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.687 15:29:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.687 15:29:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.687 15:29:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.687 15:29:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.687 15:29:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:03.687 15:29:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.946 request: 00:06:03.946 { 00:06:03.946 "method": "env_dpdk_get_mem_stats", 00:06:03.946 "req_id": 1 00:06:03.946 } 00:06:03.946 Got JSON-RPC error response 00:06:03.946 response: 00:06:03.946 { 00:06:03.946 "code": -32601, 00:06:03.946 "message": "Method not found" 00:06:03.946 } 00:06:03.946 15:29:05 -- common/autotest_common.sh@641 -- # es=1 00:06:03.946 15:29:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:03.946 15:29:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:03.946 15:29:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:03.946 15:29:05 -- app/cmdline.sh@1 -- # killprocess 61922 00:06:03.946 15:29:05 -- common/autotest_common.sh@936 -- # '[' -z 61922 ']' 00:06:03.946 15:29:05 -- common/autotest_common.sh@940 -- # kill -0 61922 00:06:03.946 15:29:05 -- common/autotest_common.sh@941 -- # uname 00:06:03.946 15:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.946 15:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61922 00:06:03.946 killing process with pid 61922 00:06:03.946 15:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.946 15:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.946 15:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61922' 00:06:03.946 15:29:05 -- common/autotest_common.sh@955 -- # kill 61922 00:06:03.946 15:29:05 -- common/autotest_common.sh@960 -- # wait 61922 00:06:04.514 00:06:04.514 real 0m2.491s 00:06:04.514 user 0m2.972s 00:06:04.514 sys 0m0.629s 00:06:04.514 15:29:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.514 15:29:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.514 ************************************ 00:06:04.514 END TEST app_cmdline 00:06:04.514 ************************************ 00:06:04.514 15:29:05 -- spdk/autotest.sh@181 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.514 15:29:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.514 15:29:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.514 15:29:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.514 ************************************ 00:06:04.514 START TEST version 00:06:04.514 ************************************ 00:06:04.514 15:29:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.773 * Looking for test storage... 
00:06:04.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.773 15:29:06 -- app/version.sh@17 -- # get_header_version major 00:06:04.773 15:29:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.773 15:29:06 -- app/version.sh@14 -- # cut -f2 00:06:04.773 15:29:06 -- app/version.sh@14 -- # tr -d '"' 00:06:04.773 15:29:06 -- app/version.sh@17 -- # major=24 00:06:04.773 15:29:06 -- app/version.sh@18 -- # get_header_version minor 00:06:04.773 15:29:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.773 15:29:06 -- app/version.sh@14 -- # cut -f2 00:06:04.773 15:29:06 -- app/version.sh@14 -- # tr -d '"' 00:06:04.773 15:29:06 -- app/version.sh@18 -- # minor=5 00:06:04.773 15:29:06 -- app/version.sh@19 -- # get_header_version patch 00:06:04.773 15:29:06 -- app/version.sh@14 -- # cut -f2 00:06:04.773 15:29:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.773 15:29:06 -- app/version.sh@14 -- # tr -d '"' 00:06:04.773 15:29:06 -- app/version.sh@19 -- # patch=0 00:06:04.773 15:29:06 -- app/version.sh@20 -- # get_header_version suffix 00:06:04.773 15:29:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.773 15:29:06 -- app/version.sh@14 -- # cut -f2 00:06:04.773 15:29:06 -- app/version.sh@14 -- # tr -d '"' 00:06:04.773 15:29:06 -- app/version.sh@20 -- # suffix=-pre 00:06:04.773 15:29:06 -- app/version.sh@22 -- # version=24.5 00:06:04.773 15:29:06 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.773 15:29:06 -- app/version.sh@28 -- # version=24.5rc0 00:06:04.773 15:29:06 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:04.773 15:29:06 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.773 15:29:06 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:04.773 15:29:06 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:04.773 00:06:04.773 real 0m0.160s 00:06:04.773 user 0m0.088s 00:06:04.773 sys 0m0.104s 00:06:04.773 ************************************ 00:06:04.773 END TEST version 00:06:04.774 ************************************ 00:06:04.774 15:29:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.774 15:29:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.774 15:29:06 -- spdk/autotest.sh@183 -- # '[' 0 -eq 1 ']' 00:06:04.774 15:29:06 -- spdk/autotest.sh@193 -- # uname -s 00:06:04.774 15:29:06 -- spdk/autotest.sh@193 -- # [[ Linux == Linux ]] 00:06:04.774 15:29:06 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:06:04.774 15:29:06 -- spdk/autotest.sh@194 -- # [[ 1 -eq 1 ]] 00:06:04.774 15:29:06 -- spdk/autotest.sh@200 -- # [[ 0 -eq 0 ]] 00:06:04.774 15:29:06 -- spdk/autotest.sh@201 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:04.774 15:29:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.774 15:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.774 15:29:06 -- common/autotest_common.sh@10 -- # set +x 00:06:05.034 ************************************ 00:06:05.034 START TEST spdk_dd 00:06:05.034 
************************************ 00:06:05.034 15:29:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:05.034 * Looking for test storage... 00:06:05.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:05.034 15:29:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.034 15:29:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.034 15:29:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.034 15:29:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.034 15:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.034 15:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.034 15:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.034 15:29:06 -- paths/export.sh@5 -- # export PATH 00:06:05.034 15:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.034 15:29:06 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.298 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.298 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.298 15:29:06 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:05.298 15:29:06 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:05.298 15:29:06 -- scripts/common.sh@309 -- # local bdf bdfs 00:06:05.298 15:29:06 -- scripts/common.sh@310 -- # local nvmes 00:06:05.298 15:29:06 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:05.298 15:29:06 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:05.298 15:29:06 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:05.298 15:29:06 -- scripts/common.sh@295 -- # local bdf= 00:06:05.298 15:29:06 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:05.298 15:29:06 -- scripts/common.sh@230 -- # local class 
00:06:05.298 15:29:06 -- scripts/common.sh@231 -- # local subclass 00:06:05.298 15:29:06 -- scripts/common.sh@232 -- # local progif 00:06:05.298 15:29:06 -- scripts/common.sh@233 -- # printf %02x 1 00:06:05.298 15:29:06 -- scripts/common.sh@233 -- # class=01 00:06:05.298 15:29:06 -- scripts/common.sh@234 -- # printf %02x 8 00:06:05.298 15:29:06 -- scripts/common.sh@234 -- # subclass=08 00:06:05.298 15:29:06 -- scripts/common.sh@235 -- # printf %02x 2 00:06:05.298 15:29:06 -- scripts/common.sh@235 -- # progif=02 00:06:05.298 15:29:06 -- scripts/common.sh@237 -- # hash lspci 00:06:05.298 15:29:06 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:05.298 15:29:06 -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:05.298 15:29:06 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:05.298 15:29:06 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:05.298 15:29:06 -- scripts/common.sh@242 -- # tr -d '"' 00:06:05.298 15:29:06 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:05.298 15:29:06 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:05.298 15:29:06 -- scripts/common.sh@15 -- # local i 00:06:05.298 15:29:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:05.298 15:29:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:05.298 15:29:06 -- scripts/common.sh@24 -- # return 0 00:06:05.298 15:29:06 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:05.298 15:29:06 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:05.298 15:29:06 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:05.298 15:29:06 -- scripts/common.sh@15 -- # local i 00:06:05.298 15:29:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:05.298 15:29:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:05.298 15:29:06 -- scripts/common.sh@24 -- # return 0 00:06:05.298 15:29:06 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:05.298 15:29:06 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:05.298 15:29:06 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:05.569 15:29:06 -- scripts/common.sh@320 -- # uname -s 00:06:05.569 15:29:06 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:05.570 15:29:06 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:05.570 15:29:06 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:05.570 15:29:06 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:05.570 15:29:06 -- scripts/common.sh@320 -- # uname -s 00:06:05.570 15:29:06 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:05.570 15:29:06 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:05.570 15:29:06 -- scripts/common.sh@325 -- # (( 2 )) 00:06:05.570 15:29:06 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:05.570 15:29:06 -- dd/dd.sh@13 -- # check_liburing 00:06:05.570 15:29:06 -- dd/common.sh@139 -- # local lib so 00:06:05.570 15:29:06 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:05.570 15:29:06 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:05.570 15:29:06 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:05.570 15:29:06 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:05.570 * spdk_dd linked to liburing 00:06:05.570 15:29:06 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:05.570 15:29:06 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:05.570 15:29:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:05.570 15:29:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:05.570 15:29:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:05.570 15:29:06 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:05.570 15:29:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:05.570 15:29:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:05.570 15:29:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:05.570 15:29:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:05.570 15:29:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:05.570 15:29:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:05.570 15:29:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:05.570 15:29:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:05.570 15:29:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:05.570 15:29:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:05.570 15:29:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:05.570 15:29:06 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:05.570 15:29:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:05.570 15:29:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:05.570 15:29:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:05.570 15:29:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:05.570 15:29:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:05.570 15:29:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:05.570 15:29:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:05.570 15:29:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:05.570 15:29:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:05.570 15:29:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:05.570 15:29:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:05.570 15:29:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:05.570 15:29:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:05.570 15:29:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:05.570 15:29:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:05.570 15:29:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:05.570 15:29:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:05.570 15:29:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:05.570 15:29:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:05.570 15:29:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:05.570 15:29:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:05.570 15:29:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:05.570 15:29:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:05.570 15:29:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:05.570 15:29:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:05.570 15:29:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:05.570 15:29:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:05.570 15:29:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:05.570 15:29:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:05.570 15:29:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:05.570 15:29:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:05.570 15:29:06 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:05.570 15:29:06 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:06:05.570 15:29:06 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:05.570 15:29:06 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:05.570 15:29:06 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:05.570 15:29:06 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:05.570 15:29:06 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:05.570 15:29:06 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:05.570 15:29:06 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:05.570 15:29:06 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:05.570 15:29:06 -- 
common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:05.570 15:29:06 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:05.570 15:29:06 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:05.570 15:29:06 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:05.570 15:29:06 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:05.570 15:29:06 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:05.570 15:29:06 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:05.571 15:29:06 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:05.571 15:29:06 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:05.571 15:29:06 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:05.571 15:29:06 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:05.571 15:29:06 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:05.571 15:29:06 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:05.571 15:29:06 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:05.571 15:29:06 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:05.571 15:29:06 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:05.571 15:29:06 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:05.571 15:29:06 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:05.571 15:29:06 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:06:05.571 15:29:06 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:05.571 15:29:06 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:05.571 15:29:06 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:05.571 15:29:06 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:05.571 15:29:06 -- dd/common.sh@157 -- # return 0 00:06:05.571 15:29:06 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:05.571 15:29:06 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:05.571 15:29:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:05.571 15:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.571 15:29:06 -- common/autotest_common.sh@10 -- # set +x 00:06:05.571 ************************************ 00:06:05.571 START TEST spdk_dd_basic_rw 00:06:05.571 ************************************ 00:06:05.571 15:29:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:05.571 * Looking for test storage... 
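The dd/common.sh trace above walks every shared object that spdk_dd is linked against, and as soon as one of them matches liburing.so.* it prints "spdk_dd linked to liburing", sources build_config.sh, and sets liburing_in_use=1 so the uring-enabled dd tests below can proceed. A rough standalone sketch of that detection pattern, assuming the shared-object list comes straight from ldd (the real helper obtains the list through its own plumbing):

#!/usr/bin/env bash
# Sketch: detect whether a binary is dynamically linked against liburing.
# BIN mirrors the spdk_dd path used in the trace; parsing ldd directly is an assumption.
BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r lib _ so _; do
    # Typical ldd line: "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
    fi
done < <(ldd "$BIN")
echo "liburing_in_use=$liburing_in_use"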
00:06:05.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:05.571 15:29:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.571 15:29:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.571 15:29:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.571 15:29:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.571 15:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.571 15:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.571 15:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.571 15:29:06 -- paths/export.sh@5 -- # export PATH 00:06:05.571 15:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.571 15:29:06 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:05.571 15:29:06 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:05.571 15:29:06 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:05.571 15:29:06 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:05.571 15:29:06 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:05.571 15:29:06 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:05.571 15:29:06 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:05.571 15:29:06 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.571 15:29:06 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.571 15:29:06 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:05.571 15:29:06 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:05.571 15:29:06 -- dd/common.sh@126 -- # mapfile -t id 00:06:05.571 15:29:06 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:05.832 15:29:07 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:05.832 15:29:07 -- dd/common.sh@130 -- # lbaf=04 00:06:05.833 15:29:07 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:05.833 15:29:07 -- dd/common.sh@132 -- # lbaf=4096 00:06:05.833 15:29:07 -- dd/common.sh@134 -- # echo 4096 00:06:05.833 15:29:07 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:05.833 15:29:07 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:05.833 15:29:07 -- dd/basic_rw.sh@96 -- # : 00:06:05.833 15:29:07 -- dd/basic_rw.sh@96 -- # gen_conf 00:06:05.833 15:29:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:05.833 15:29:07 -- dd/common.sh@31 -- # xtrace_disable 00:06:05.833 15:29:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.833 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:06:05.833 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:06:05.833 { 
00:06:05.833 "subsystems": [ 00:06:05.833 { 00:06:05.833 "subsystem": "bdev", 00:06:05.833 "config": [ 00:06:05.833 { 00:06:05.833 "params": { 00:06:05.833 "trtype": "pcie", 00:06:05.833 "traddr": "0000:00:10.0", 00:06:05.833 "name": "Nvme0" 00:06:05.833 }, 00:06:05.833 "method": "bdev_nvme_attach_controller" 00:06:05.833 }, 00:06:05.833 { 00:06:05.833 "method": "bdev_wait_for_examine" 00:06:05.833 } 00:06:05.833 ] 00:06:05.833 } 00:06:05.833 ] 00:06:05.833 } 00:06:05.833 ************************************ 00:06:05.833 START TEST dd_bs_lt_native_bs 00:06:05.833 ************************************ 00:06:05.833 15:29:07 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:05.833 15:29:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.833 15:29:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:05.833 15:29:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.833 15:29:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.833 15:29:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.833 15:29:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.833 15:29:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.833 15:29:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.833 15:29:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.833 15:29:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:05.833 15:29:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:06.092 [2024-04-17 15:29:07.321599] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:06.092 [2024-04-17 15:29:07.321703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62271 ] 00:06:06.092 [2024-04-17 15:29:07.460164] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.350 [2024-04-17 15:29:07.590010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.350 [2024-04-17 15:29:07.789403] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:06.350 [2024-04-17 15:29:07.789490] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.609 [2024-04-17 15:29:07.961329] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:06.867 15:29:08 -- common/autotest_common.sh@641 -- # es=234 00:06:06.867 15:29:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:06.867 15:29:08 -- common/autotest_common.sh@650 -- # es=106 00:06:06.867 ************************************ 00:06:06.867 END TEST dd_bs_lt_native_bs 00:06:06.867 ************************************ 00:06:06.867 15:29:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:06.867 15:29:08 -- common/autotest_common.sh@658 -- # es=1 00:06:06.867 15:29:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:06.867 00:06:06.867 real 0m0.864s 00:06:06.867 user 0m0.548s 00:06:06.867 sys 0m0.206s 00:06:06.867 15:29:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.868 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:06:06.868 15:29:08 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:06.868 15:29:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:06.868 15:29:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.868 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:06:06.868 ************************************ 00:06:06.868 START TEST dd_rw 00:06:06.868 ************************************ 00:06:06.868 15:29:08 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:06:06.868 15:29:08 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:06.868 15:29:08 -- dd/basic_rw.sh@12 -- # local count size 00:06:06.868 15:29:08 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:06.868 15:29:08 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:06.868 15:29:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:06.868 15:29:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:06.868 15:29:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:06.868 15:29:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:06.868 15:29:08 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:06.868 15:29:08 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:06.868 15:29:08 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:06.868 15:29:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:06.868 15:29:08 -- dd/basic_rw.sh@23 -- # count=15 00:06:06.868 15:29:08 -- dd/basic_rw.sh@24 -- # count=15 00:06:06.868 15:29:08 -- dd/basic_rw.sh@25 -- # size=61440 00:06:06.868 15:29:08 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:06.868 15:29:08 -- dd/common.sh@98 -- # xtrace_disable 00:06:06.868 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.804 15:29:08 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
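With dd_bs_lt_native_bs done (spdk_dd refuses the 2048-byte --bs against the 4096-byte native block size and exits non-zero, which the NOT wrapper treats as a pass), START TEST dd_rw begins the read/write matrix: for each block size (4096, 8192, 16384) and queue depth (1, 64) the trace below writes 15, 7 or 3 blocks of generated data to Nvme0n1, reads the same count back into a second dump file, compares the two files with diff -q, and zero-fills 1 MiB to reset the namespace before the next combination. A condensed sketch of one such round for the bs=4096, qd=1 case, using the paths and flags visible in the commands below (gen_conf as in the earlier sketch; the <( ) wiring remains an assumption):

#!/usr/bin/env bash
# Sketch of one dd_rw round: write, read back, verify, wipe.
# gen_conf is the config helper from the earlier sketch.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)              # write the 61440 bytes held in dd.dump0
"$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)   # read the same 15 blocks back
diff -q "$DUMP0" "$DUMP1"                                                          # verify the round trip
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)       # clear_nvme: zero-fill 1 MiB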
00:06:07.804 15:29:08 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:07.804 15:29:08 -- dd/common.sh@31 -- # xtrace_disable 00:06:07.804 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.804 [2024-04-17 15:29:09.013385] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:07.804 [2024-04-17 15:29:09.013690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62313 ] 00:06:07.804 { 00:06:07.804 "subsystems": [ 00:06:07.804 { 00:06:07.804 "subsystem": "bdev", 00:06:07.804 "config": [ 00:06:07.804 { 00:06:07.804 "params": { 00:06:07.804 "trtype": "pcie", 00:06:07.804 "traddr": "0000:00:10.0", 00:06:07.804 "name": "Nvme0" 00:06:07.804 }, 00:06:07.804 "method": "bdev_nvme_attach_controller" 00:06:07.804 }, 00:06:07.804 { 00:06:07.804 "method": "bdev_wait_for_examine" 00:06:07.804 } 00:06:07.804 ] 00:06:07.804 } 00:06:07.804 ] 00:06:07.804 } 00:06:07.804 [2024-04-17 15:29:09.152201] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.062 [2024-04-17 15:29:09.274919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.630  Copying: 60/60 [kB] (average 19 MBps) 00:06:08.630 00:06:08.630 15:29:09 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:08.630 15:29:09 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:08.630 15:29:09 -- dd/common.sh@31 -- # xtrace_disable 00:06:08.630 15:29:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.630 [2024-04-17 15:29:09.889732] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:08.630 [2024-04-17 15:29:09.889868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62332 ] 00:06:08.630 { 00:06:08.630 "subsystems": [ 00:06:08.630 { 00:06:08.630 "subsystem": "bdev", 00:06:08.630 "config": [ 00:06:08.630 { 00:06:08.630 "params": { 00:06:08.630 "trtype": "pcie", 00:06:08.630 "traddr": "0000:00:10.0", 00:06:08.630 "name": "Nvme0" 00:06:08.630 }, 00:06:08.630 "method": "bdev_nvme_attach_controller" 00:06:08.630 }, 00:06:08.630 { 00:06:08.630 "method": "bdev_wait_for_examine" 00:06:08.630 } 00:06:08.630 ] 00:06:08.630 } 00:06:08.630 ] 00:06:08.630 } 00:06:08.630 [2024-04-17 15:29:10.029151] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.889 [2024-04-17 15:29:10.177991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.408  Copying: 60/60 [kB] (average 19 MBps) 00:06:09.408 00:06:09.408 15:29:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.408 15:29:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:09.408 15:29:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:09.408 15:29:10 -- dd/common.sh@11 -- # local nvme_ref= 00:06:09.408 15:29:10 -- dd/common.sh@12 -- # local size=61440 00:06:09.408 15:29:10 -- dd/common.sh@14 -- # local bs=1048576 00:06:09.408 15:29:10 -- dd/common.sh@15 -- # local count=1 00:06:09.408 15:29:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:09.408 15:29:10 -- dd/common.sh@18 -- # gen_conf 00:06:09.408 15:29:10 -- dd/common.sh@31 -- # xtrace_disable 00:06:09.408 15:29:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.408 [2024-04-17 15:29:10.803719] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:09.408 [2024-04-17 15:29:10.804064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62353 ] 00:06:09.408 { 00:06:09.408 "subsystems": [ 00:06:09.408 { 00:06:09.408 "subsystem": "bdev", 00:06:09.408 "config": [ 00:06:09.408 { 00:06:09.408 "params": { 00:06:09.408 "trtype": "pcie", 00:06:09.408 "traddr": "0000:00:10.0", 00:06:09.408 "name": "Nvme0" 00:06:09.408 }, 00:06:09.408 "method": "bdev_nvme_attach_controller" 00:06:09.408 }, 00:06:09.408 { 00:06:09.408 "method": "bdev_wait_for_examine" 00:06:09.408 } 00:06:09.408 ] 00:06:09.408 } 00:06:09.408 ] 00:06:09.408 } 00:06:09.667 [2024-04-17 15:29:10.939376] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.667 [2024-04-17 15:29:11.084005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.203  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:10.203 00:06:10.203 15:29:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.203 15:29:11 -- dd/basic_rw.sh@23 -- # count=15 00:06:10.203 15:29:11 -- dd/basic_rw.sh@24 -- # count=15 00:06:10.203 15:29:11 -- dd/basic_rw.sh@25 -- # size=61440 00:06:10.203 15:29:11 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:10.203 15:29:11 -- dd/common.sh@98 -- # xtrace_disable 00:06:10.203 15:29:11 -- common/autotest_common.sh@10 -- # set +x 00:06:11.139 15:29:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:11.139 15:29:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.139 15:29:12 -- dd/common.sh@31 -- # xtrace_disable 00:06:11.139 15:29:12 -- common/autotest_common.sh@10 -- # set +x 00:06:11.139 [2024-04-17 15:29:12.318086] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:11.139 [2024-04-17 15:29:12.318186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:06:11.139 { 00:06:11.139 "subsystems": [ 00:06:11.139 { 00:06:11.139 "subsystem": "bdev", 00:06:11.139 "config": [ 00:06:11.139 { 00:06:11.139 "params": { 00:06:11.139 "trtype": "pcie", 00:06:11.139 "traddr": "0000:00:10.0", 00:06:11.139 "name": "Nvme0" 00:06:11.139 }, 00:06:11.139 "method": "bdev_nvme_attach_controller" 00:06:11.139 }, 00:06:11.139 { 00:06:11.139 "method": "bdev_wait_for_examine" 00:06:11.139 } 00:06:11.139 ] 00:06:11.139 } 00:06:11.139 ] 00:06:11.139 } 00:06:11.139 [2024-04-17 15:29:12.459208] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.397 [2024-04-17 15:29:12.607533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.965  Copying: 60/60 [kB] (average 58 MBps) 00:06:11.965 00:06:11.965 15:29:13 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:11.965 15:29:13 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.965 15:29:13 -- dd/common.sh@31 -- # xtrace_disable 00:06:11.965 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.965 [2024-04-17 15:29:13.191281] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:11.965 [2024-04-17 15:29:13.191358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:06:11.965 { 00:06:11.965 "subsystems": [ 00:06:11.965 { 00:06:11.965 "subsystem": "bdev", 00:06:11.965 "config": [ 00:06:11.965 { 00:06:11.965 "params": { 00:06:11.965 "trtype": "pcie", 00:06:11.965 "traddr": "0000:00:10.0", 00:06:11.965 "name": "Nvme0" 00:06:11.965 }, 00:06:11.965 "method": "bdev_nvme_attach_controller" 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "bdev_wait_for_examine" 00:06:11.965 } 00:06:11.965 ] 00:06:11.965 } 00:06:11.965 ] 00:06:11.965 } 00:06:11.965 [2024-04-17 15:29:13.326830] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.225 [2024-04-17 15:29:13.438472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.794  Copying: 60/60 [kB] (average 58 MBps) 00:06:12.794 00:06:12.794 15:29:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.794 15:29:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:12.794 15:29:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.794 15:29:13 -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.794 15:29:13 -- dd/common.sh@12 -- # local size=61440 00:06:12.794 15:29:13 -- dd/common.sh@14 -- # local bs=1048576 00:06:12.794 15:29:13 -- dd/common.sh@15 -- # local count=1 00:06:12.794 15:29:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.794 15:29:13 -- dd/common.sh@18 -- # gen_conf 00:06:12.794 15:29:14 -- dd/common.sh@31 -- # xtrace_disable 00:06:12.794 15:29:14 -- common/autotest_common.sh@10 -- # set +x 00:06:12.794 [2024-04-17 15:29:14.056384] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:12.794 [2024-04-17 15:29:14.056499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62412 ] 00:06:12.794 { 00:06:12.794 "subsystems": [ 00:06:12.794 { 00:06:12.794 "subsystem": "bdev", 00:06:12.794 "config": [ 00:06:12.794 { 00:06:12.794 "params": { 00:06:12.794 "trtype": "pcie", 00:06:12.794 "traddr": "0000:00:10.0", 00:06:12.794 "name": "Nvme0" 00:06:12.794 }, 00:06:12.794 "method": "bdev_nvme_attach_controller" 00:06:12.794 }, 00:06:12.794 { 00:06:12.794 "method": "bdev_wait_for_examine" 00:06:12.794 } 00:06:12.794 ] 00:06:12.794 } 00:06:12.794 ] 00:06:12.794 } 00:06:12.794 [2024-04-17 15:29:14.198692] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.052 [2024-04-17 15:29:14.329394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.568  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:13.568 00:06:13.568 15:29:14 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.568 15:29:14 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.568 15:29:14 -- dd/basic_rw.sh@23 -- # count=7 00:06:13.568 15:29:14 -- dd/basic_rw.sh@24 -- # count=7 00:06:13.568 15:29:14 -- dd/basic_rw.sh@25 -- # size=57344 00:06:13.568 15:29:14 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:13.568 15:29:14 -- dd/common.sh@98 -- # xtrace_disable 00:06:13.568 15:29:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.136 15:29:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:14.136 15:29:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.136 15:29:15 -- dd/common.sh@31 -- # xtrace_disable 00:06:14.136 15:29:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.136 [2024-04-17 15:29:15.493979] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:14.136 [2024-04-17 15:29:15.494551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:06:14.136 { 00:06:14.136 "subsystems": [ 00:06:14.136 { 00:06:14.136 "subsystem": "bdev", 00:06:14.136 "config": [ 00:06:14.136 { 00:06:14.136 "params": { 00:06:14.136 "trtype": "pcie", 00:06:14.136 "traddr": "0000:00:10.0", 00:06:14.136 "name": "Nvme0" 00:06:14.136 }, 00:06:14.136 "method": "bdev_nvme_attach_controller" 00:06:14.136 }, 00:06:14.136 { 00:06:14.136 "method": "bdev_wait_for_examine" 00:06:14.136 } 00:06:14.136 ] 00:06:14.136 } 00:06:14.136 ] 00:06:14.136 } 00:06:14.395 [2024-04-17 15:29:15.634235] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.395 [2024-04-17 15:29:15.770624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.912  Copying: 56/56 [kB] (average 27 MBps) 00:06:14.912 00:06:14.912 15:29:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:14.912 15:29:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.912 15:29:16 -- dd/common.sh@31 -- # xtrace_disable 00:06:14.912 15:29:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.172 [2024-04-17 15:29:16.360307] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:15.172 [2024-04-17 15:29:16.360388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62450 ] 00:06:15.172 { 00:06:15.172 "subsystems": [ 00:06:15.172 { 00:06:15.172 "subsystem": "bdev", 00:06:15.172 "config": [ 00:06:15.172 { 00:06:15.172 "params": { 00:06:15.172 "trtype": "pcie", 00:06:15.172 "traddr": "0000:00:10.0", 00:06:15.172 "name": "Nvme0" 00:06:15.172 }, 00:06:15.172 "method": "bdev_nvme_attach_controller" 00:06:15.172 }, 00:06:15.172 { 00:06:15.172 "method": "bdev_wait_for_examine" 00:06:15.172 } 00:06:15.172 ] 00:06:15.172 } 00:06:15.172 ] 00:06:15.172 } 00:06:15.172 [2024-04-17 15:29:16.493285] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.430 [2024-04-17 15:29:16.622289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.998  Copying: 56/56 [kB] (average 27 MBps) 00:06:15.998 00:06:15.998 15:29:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.998 15:29:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:15.998 15:29:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.998 15:29:17 -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.998 15:29:17 -- dd/common.sh@12 -- # local size=57344 00:06:15.998 15:29:17 -- dd/common.sh@14 -- # local bs=1048576 00:06:15.998 15:29:17 -- dd/common.sh@15 -- # local count=1 00:06:15.998 15:29:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.998 15:29:17 -- dd/common.sh@18 -- # gen_conf 00:06:15.998 15:29:17 -- dd/common.sh@31 -- # xtrace_disable 00:06:15.998 15:29:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.998 [2024-04-17 15:29:17.204870] Starting SPDK v24.05-pre git sha1 
480afb9a1 / DPDK 23.11.0 initialization... 00:06:15.998 [2024-04-17 15:29:17.204970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62471 ] 00:06:15.998 { 00:06:15.998 "subsystems": [ 00:06:15.998 { 00:06:15.998 "subsystem": "bdev", 00:06:15.998 "config": [ 00:06:15.998 { 00:06:15.998 "params": { 00:06:15.998 "trtype": "pcie", 00:06:15.998 "traddr": "0000:00:10.0", 00:06:15.998 "name": "Nvme0" 00:06:15.998 }, 00:06:15.998 "method": "bdev_nvme_attach_controller" 00:06:15.998 }, 00:06:15.998 { 00:06:15.998 "method": "bdev_wait_for_examine" 00:06:15.998 } 00:06:15.998 ] 00:06:15.998 } 00:06:15.998 ] 00:06:15.998 } 00:06:15.998 [2024-04-17 15:29:17.342836] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.257 [2024-04-17 15:29:17.470945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.823  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.823 00:06:16.823 15:29:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.823 15:29:17 -- dd/basic_rw.sh@23 -- # count=7 00:06:16.823 15:29:17 -- dd/basic_rw.sh@24 -- # count=7 00:06:16.823 15:29:17 -- dd/basic_rw.sh@25 -- # size=57344 00:06:16.823 15:29:17 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:16.823 15:29:17 -- dd/common.sh@98 -- # xtrace_disable 00:06:16.823 15:29:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.391 15:29:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:17.391 15:29:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:17.391 15:29:18 -- dd/common.sh@31 -- # xtrace_disable 00:06:17.391 15:29:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.391 [2024-04-17 15:29:18.589626] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:17.391 [2024-04-17 15:29:18.589726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62490 ] 00:06:17.391 { 00:06:17.391 "subsystems": [ 00:06:17.391 { 00:06:17.391 "subsystem": "bdev", 00:06:17.391 "config": [ 00:06:17.391 { 00:06:17.391 "params": { 00:06:17.391 "trtype": "pcie", 00:06:17.391 "traddr": "0000:00:10.0", 00:06:17.391 "name": "Nvme0" 00:06:17.391 }, 00:06:17.391 "method": "bdev_nvme_attach_controller" 00:06:17.391 }, 00:06:17.391 { 00:06:17.391 "method": "bdev_wait_for_examine" 00:06:17.391 } 00:06:17.391 ] 00:06:17.391 } 00:06:17.391 ] 00:06:17.391 } 00:06:17.391 [2024-04-17 15:29:18.728128] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.650 [2024-04-17 15:29:18.855773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.217  Copying: 56/56 [kB] (average 54 MBps) 00:06:18.217 00:06:18.217 15:29:19 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:18.217 15:29:19 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:18.217 15:29:19 -- dd/common.sh@31 -- # xtrace_disable 00:06:18.217 15:29:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 [2024-04-17 15:29:19.440625] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:18.217 [2024-04-17 15:29:19.440723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ] 00:06:18.217 { 00:06:18.217 "subsystems": [ 00:06:18.217 { 00:06:18.217 "subsystem": "bdev", 00:06:18.217 "config": [ 00:06:18.217 { 00:06:18.217 "params": { 00:06:18.217 "trtype": "pcie", 00:06:18.217 "traddr": "0000:00:10.0", 00:06:18.217 "name": "Nvme0" 00:06:18.217 }, 00:06:18.217 "method": "bdev_nvme_attach_controller" 00:06:18.217 }, 00:06:18.217 { 00:06:18.217 "method": "bdev_wait_for_examine" 00:06:18.217 } 00:06:18.217 ] 00:06:18.217 } 00:06:18.217 ] 00:06:18.217 } 00:06:18.217 [2024-04-17 15:29:19.579492] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.476 [2024-04-17 15:29:19.711926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.045  Copying: 56/56 [kB] (average 54 MBps) 00:06:19.045 00:06:19.045 15:29:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.045 15:29:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:19.045 15:29:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:19.045 15:29:20 -- dd/common.sh@11 -- # local nvme_ref= 00:06:19.045 15:29:20 -- dd/common.sh@12 -- # local size=57344 00:06:19.045 15:29:20 -- dd/common.sh@14 -- # local bs=1048576 00:06:19.045 15:29:20 -- dd/common.sh@15 -- # local count=1 00:06:19.045 15:29:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:19.045 15:29:20 -- dd/common.sh@18 -- # gen_conf 00:06:19.045 15:29:20 -- dd/common.sh@31 -- # xtrace_disable 00:06:19.045 15:29:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.045 { 00:06:19.045 "subsystems": [ 00:06:19.045 { 00:06:19.045 
"subsystem": "bdev", 00:06:19.045 "config": [ 00:06:19.045 { 00:06:19.045 "params": { 00:06:19.045 "trtype": "pcie", 00:06:19.045 "traddr": "0000:00:10.0", 00:06:19.045 "name": "Nvme0" 00:06:19.045 }, 00:06:19.045 "method": "bdev_nvme_attach_controller" 00:06:19.045 }, 00:06:19.045 { 00:06:19.045 "method": "bdev_wait_for_examine" 00:06:19.045 } 00:06:19.045 ] 00:06:19.045 } 00:06:19.045 ] 00:06:19.045 } 00:06:19.045 [2024-04-17 15:29:20.331556] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:19.045 [2024-04-17 15:29:20.331668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:06:19.045 [2024-04-17 15:29:20.477218] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.304 [2024-04-17 15:29:20.589655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.823  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:19.823 00:06:19.823 15:29:21 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:19.823 15:29:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:19.823 15:29:21 -- dd/basic_rw.sh@23 -- # count=3 00:06:19.823 15:29:21 -- dd/basic_rw.sh@24 -- # count=3 00:06:19.823 15:29:21 -- dd/basic_rw.sh@25 -- # size=49152 00:06:19.823 15:29:21 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:19.823 15:29:21 -- dd/common.sh@98 -- # xtrace_disable 00:06:19.823 15:29:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 15:29:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:20.391 15:29:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:20.391 15:29:21 -- dd/common.sh@31 -- # xtrace_disable 00:06:20.391 15:29:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 [2024-04-17 15:29:21.705321] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:20.391 [2024-04-17 15:29:21.705436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62549 ] 00:06:20.391 { 00:06:20.391 "subsystems": [ 00:06:20.391 { 00:06:20.391 "subsystem": "bdev", 00:06:20.391 "config": [ 00:06:20.391 { 00:06:20.391 "params": { 00:06:20.391 "trtype": "pcie", 00:06:20.391 "traddr": "0000:00:10.0", 00:06:20.391 "name": "Nvme0" 00:06:20.391 }, 00:06:20.391 "method": "bdev_nvme_attach_controller" 00:06:20.391 }, 00:06:20.391 { 00:06:20.391 "method": "bdev_wait_for_examine" 00:06:20.391 } 00:06:20.391 ] 00:06:20.391 } 00:06:20.391 ] 00:06:20.391 } 00:06:20.649 [2024-04-17 15:29:21.846755] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.649 [2024-04-17 15:29:21.974484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.166  Copying: 48/48 [kB] (average 46 MBps) 00:06:21.166 00:06:21.166 15:29:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:21.166 15:29:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:21.166 15:29:22 -- dd/common.sh@31 -- # xtrace_disable 00:06:21.166 15:29:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.166 [2024-04-17 15:29:22.563963] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:21.166 [2024-04-17 15:29:22.564508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62568 ] 00:06:21.166 { 00:06:21.166 "subsystems": [ 00:06:21.166 { 00:06:21.166 "subsystem": "bdev", 00:06:21.166 "config": [ 00:06:21.166 { 00:06:21.166 "params": { 00:06:21.166 "trtype": "pcie", 00:06:21.166 "traddr": "0000:00:10.0", 00:06:21.166 "name": "Nvme0" 00:06:21.166 }, 00:06:21.166 "method": "bdev_nvme_attach_controller" 00:06:21.166 }, 00:06:21.166 { 00:06:21.166 "method": "bdev_wait_for_examine" 00:06:21.166 } 00:06:21.166 ] 00:06:21.166 } 00:06:21.166 ] 00:06:21.166 } 00:06:21.424 [2024-04-17 15:29:22.704001] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.424 [2024-04-17 15:29:22.818641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.943  Copying: 48/48 [kB] (average 46 MBps) 00:06:21.943 00:06:21.943 15:29:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.943 15:29:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:21.943 15:29:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:21.943 15:29:23 -- dd/common.sh@11 -- # local nvme_ref= 00:06:21.943 15:29:23 -- dd/common.sh@12 -- # local size=49152 00:06:21.944 15:29:23 -- dd/common.sh@14 -- # local bs=1048576 00:06:21.944 15:29:23 -- dd/common.sh@15 -- # local count=1 00:06:21.944 15:29:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:21.944 15:29:23 -- dd/common.sh@18 -- # gen_conf 00:06:21.944 15:29:23 -- dd/common.sh@31 -- # xtrace_disable 00:06:21.944 15:29:23 -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 [2024-04-17 15:29:23.428176] Starting SPDK v24.05-pre git sha1 
480afb9a1 / DPDK 23.11.0 initialization... 00:06:22.201 [2024-04-17 15:29:23.428533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62589 ] 00:06:22.201 { 00:06:22.201 "subsystems": [ 00:06:22.201 { 00:06:22.201 "subsystem": "bdev", 00:06:22.201 "config": [ 00:06:22.201 { 00:06:22.201 "params": { 00:06:22.201 "trtype": "pcie", 00:06:22.201 "traddr": "0000:00:10.0", 00:06:22.201 "name": "Nvme0" 00:06:22.201 }, 00:06:22.201 "method": "bdev_nvme_attach_controller" 00:06:22.201 }, 00:06:22.201 { 00:06:22.201 "method": "bdev_wait_for_examine" 00:06:22.201 } 00:06:22.201 ] 00:06:22.201 } 00:06:22.201 ] 00:06:22.201 } 00:06:22.201 [2024-04-17 15:29:23.567835] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.459 [2024-04-17 15:29:23.698187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.026  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:23.026 00:06:23.026 15:29:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:23.026 15:29:24 -- dd/basic_rw.sh@23 -- # count=3 00:06:23.026 15:29:24 -- dd/basic_rw.sh@24 -- # count=3 00:06:23.026 15:29:24 -- dd/basic_rw.sh@25 -- # size=49152 00:06:23.026 15:29:24 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:23.026 15:29:24 -- dd/common.sh@98 -- # xtrace_disable 00:06:23.026 15:29:24 -- common/autotest_common.sh@10 -- # set +x 00:06:23.284 15:29:24 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:23.284 15:29:24 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:23.284 15:29:24 -- dd/common.sh@31 -- # xtrace_disable 00:06:23.284 15:29:24 -- common/autotest_common.sh@10 -- # set +x 00:06:23.543 [2024-04-17 15:29:24.776072] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
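Each basic_rw pass above is the same round trip: write dd.dump0 (generated to exactly bs*count bytes) into Nvme0n1 at the given block size and queue depth, read the same number of blocks back into dd.dump1, diff the two files, then let clear_nvme zero the start of the bdev before the next combination. A hedged sketch of one iteration, reusing the /tmp/bdev.json stand-in for the config the suite passes over /dev/fd/62:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
bs=16384 qd=64 count=3
# write: dump file -> bdev
"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --bs=$bs --qd=$qd --json /tmp/bdev.json
# read the same blocks back: bdev -> dump file
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --bs=$bs --qd=$qd --count=$count --json /tmp/bdev.json
# the pass only counts if the round trip is bit-exact
diff -q "$TESTDIR/dd.dump0" "$TESTDIR/dd.dump1"
# clear_nvme: overwrite the first 1 MiB with zeroes so the next pass starts clean
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/bdev.json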
00:06:23.543 [2024-04-17 15:29:24.776204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62608 ] 00:06:23.543 { 00:06:23.543 "subsystems": [ 00:06:23.543 { 00:06:23.543 "subsystem": "bdev", 00:06:23.543 "config": [ 00:06:23.543 { 00:06:23.543 "params": { 00:06:23.543 "trtype": "pcie", 00:06:23.543 "traddr": "0000:00:10.0", 00:06:23.543 "name": "Nvme0" 00:06:23.543 }, 00:06:23.543 "method": "bdev_nvme_attach_controller" 00:06:23.543 }, 00:06:23.543 { 00:06:23.543 "method": "bdev_wait_for_examine" 00:06:23.543 } 00:06:23.543 ] 00:06:23.543 } 00:06:23.543 ] 00:06:23.543 } 00:06:23.543 [2024-04-17 15:29:24.916073] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.801 [2024-04-17 15:29:25.050430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.368  Copying: 48/48 [kB] (average 46 MBps) 00:06:24.368 00:06:24.368 15:29:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:24.368 15:29:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:24.368 15:29:25 -- dd/common.sh@31 -- # xtrace_disable 00:06:24.368 15:29:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.368 [2024-04-17 15:29:25.636113] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:24.368 [2024-04-17 15:29:25.636234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62627 ] 00:06:24.368 { 00:06:24.368 "subsystems": [ 00:06:24.368 { 00:06:24.368 "subsystem": "bdev", 00:06:24.368 "config": [ 00:06:24.368 { 00:06:24.368 "params": { 00:06:24.368 "trtype": "pcie", 00:06:24.368 "traddr": "0000:00:10.0", 00:06:24.368 "name": "Nvme0" 00:06:24.368 }, 00:06:24.368 "method": "bdev_nvme_attach_controller" 00:06:24.368 }, 00:06:24.368 { 00:06:24.369 "method": "bdev_wait_for_examine" 00:06:24.369 } 00:06:24.369 ] 00:06:24.369 } 00:06:24.369 ] 00:06:24.369 } 00:06:24.369 [2024-04-17 15:29:25.774933] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.628 [2024-04-17 15:29:25.908331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.146  Copying: 48/48 [kB] (average 46 MBps) 00:06:25.146 00:06:25.146 15:29:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.146 15:29:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:25.146 15:29:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:25.146 15:29:26 -- dd/common.sh@11 -- # local nvme_ref= 00:06:25.146 15:29:26 -- dd/common.sh@12 -- # local size=49152 00:06:25.146 15:29:26 -- dd/common.sh@14 -- # local bs=1048576 00:06:25.146 15:29:26 -- dd/common.sh@15 -- # local count=1 00:06:25.146 15:29:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:25.146 15:29:26 -- dd/common.sh@18 -- # gen_conf 00:06:25.146 15:29:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:25.146 15:29:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.146 { 00:06:25.146 "subsystems": [ 00:06:25.146 { 00:06:25.146 
"subsystem": "bdev", 00:06:25.146 "config": [ 00:06:25.146 { 00:06:25.146 "params": { 00:06:25.146 "trtype": "pcie", 00:06:25.146 "traddr": "0000:00:10.0", 00:06:25.146 "name": "Nvme0" 00:06:25.146 }, 00:06:25.146 "method": "bdev_nvme_attach_controller" 00:06:25.146 }, 00:06:25.146 { 00:06:25.146 "method": "bdev_wait_for_examine" 00:06:25.146 } 00:06:25.146 ] 00:06:25.146 } 00:06:25.146 ] 00:06:25.146 } 00:06:25.146 [2024-04-17 15:29:26.502187] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:25.146 [2024-04-17 15:29:26.502295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62648 ] 00:06:25.404 [2024-04-17 15:29:26.640207] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.404 [2024-04-17 15:29:26.767664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.921  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:25.921 00:06:25.921 00:06:25.921 real 0m19.046s 00:06:25.921 user 0m14.134s 00:06:25.921 sys 0m7.332s 00:06:25.921 15:29:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.921 ************************************ 00:06:25.921 END TEST dd_rw 00:06:25.921 ************************************ 00:06:25.921 15:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.921 15:29:27 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:25.921 15:29:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.921 15:29:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.921 15:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.179 ************************************ 00:06:26.179 START TEST dd_rw_offset 00:06:26.179 ************************************ 00:06:26.179 15:29:27 -- common/autotest_common.sh@1111 -- # basic_offset 00:06:26.179 15:29:27 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:26.179 15:29:27 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:26.179 15:29:27 -- dd/common.sh@98 -- # xtrace_disable 00:06:26.179 15:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.179 15:29:27 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:26.179 15:29:27 -- dd/basic_rw.sh@56 -- # 
data=rke0790vbv1c6r9kmuc8trd7cwgzhzml9egdnnfz1txsy5vru2re8tsjrni32myppm13pp486unb1vdiapd8v4593ynpukog6w28f5lptjary8xsc42bplio1lssc91gf8xoljv6vvpm9xs5twfllqgqq177q569rtceqruua0ydkrmu28kp2vi3one94pk5rz58z5sljipxjiey6aj4vy1zfffs11ci2h8unue9r6iyzwnirh7h5pe0whd37jos1jxx95l613y2l7e7vmk4kr8mmlkophe8le0hnmvsuwntf2qr3smkf9xnl49urqzo2qdfim1l8ogo03bqe7hh60gc990bh89wo7ejv2sp80j64lmoqxg06859gc3ne6kfu7depyx3rfea2h0485zqwdnysas8eddqb4urpsmyy9mmd5a19yl7man2744fhkompm0bp349rbqwigc2rv37iacl0whmnt95id8m0e6zer2fsez9adl25lvwzn65iwbwy3ft1jn4dv82m129duj9t3idtxf7osz7rqmqxp8zuuw8p2tc688o3htwusshkwtlf2mbss6k62f5yhvh8i2ljggnn4yx2vctz14oe9a88qhgrohaeip7eegudpa5x4sds9gjteh5goe8f4blrl7h3q2q5z8c4siq39h4mgk97pptzj69qikwuj195kmu4xg7i9gw6ej23vzjcy6xtpdaeny2lf3fkzi2rrr6l47nfaln88694olnfbckr197g7lvxl3yxvxvb7bak2q1et2avfz982e3tmj976r4rwqdcsb7xf9pr1goze6ashvtfw9mb1ke8893nlaarm5bf3ahyrtpf65fdyyrde5ot046qwcwgajfu2sahauxv3mhfhzbhjnq8v745rtlppwdvkrak8sk188w64pezhv4erbf913a659inx1g2vixeaevfoxdnck2o33x6tvqx79su3knwt5ulszzweqpk8w4pymye4jyebnsh1mr03jtyh18en238fqkvh19cgix0ap6mibvrevgkmfj8egjajoetemlpg6heorpsz426cxkg37r406ijp8uc6h4c5ze74yjajfb0lg05nxwfzp7qadq415zcllyiyzjofwake0izharuqpu180rsnspz2vdtc10hk9sx9fnln5b72bknf7tb7abazos9mjl5kot2prry3musivneriamm0plc9iskiz0m7dgo413ble99bd8a0bbrs50qs8dlnzmhvfbwglhe9yoz9tby69elru8vqd9oakyfzgu5ce2a2folrbddxgvtcytnb9kplngyykp03clvm1d0v90nifcy4tqwan9co372xr746hz0ni0x7nadfeq1zwg7otgsc63qwutkl2sqtwnvhlkva9vxtej8durqg5f5lfesh42980nheso2hwbaaftg5c9c2zn1f50k4e5i4vassu7zchpn6ddanzyt6daoiv1zcvzufrr2l9vncx2058ja31pe1pz2ijn9szyjnnohel485bswu5of9g7hj7pxrnzubscup9tszebg686gm46k743ud7vulot4gxvlxme9yjcrbw436tt9vz18156etuxluptyhkbiy17dqmrbezoqi2n5h8l5nylupf38zyknlal9yndjdfkr2m74s7nsfb5ssbbjvw5aav2ir1a26dsnqhspnj9onzzbx4fd55bjlum7ay3lt9qwt0xyo20mid7fdjjwtycybs6ahnbccbglrn4bde54ybmnt2xf7j27114zd8emk4zr35e15p68m911a8jiyppsn1dbzc8yg50o35596mwc7gigvx98awigxtu9he7qsm164qprc55tl8tzi6sfsk4kw3a0c4of0dfd5zo7371c24v1w4ady7fh7ofga0as36gaelsarbib6aq29ifd3og5tdp4gs1z9ju2ngdxowunp10ls5e3urfe9hos0wmevv5n4wqdmckpn4caryraqiblc2z1sjqxp5xdm3z8dmtqeqeoxwczin2j53kpmltoxhhpawwex9ws2ne7cf5spdfzbrhjzwdmtzw6tsg6dx2t4tplfexqnpjjp9metxdqsj3ff5kf698h5r9xx61glj5pt2q2ay7xicpb2k7w59t5mw2wd5wwgug931q9nav6scetnicl1y87mcbllp7bi2chz65fyu96xoskxbpr7apujf38zsuc18khbh7dae48un4d3p7czbzeux898b3t25qzjzc7p8hxhfeamyfw0q1qsjz84m7su0htbemkrgdbnx23nlgobaad5ue9tsf82ilosbhyiayzluzbybnd4fs3xmlbcpwiee3z6be2bzb5h962wdm1tgrwqo2jngycizb9ze94iivbqnnybe51dmc5jr4b0jkcaipm3snr5frjtwlxapsfzf27fj46b6m7j3um55nmiu06tyi8iws931liircq5brxz1k9le94cwi18ic0mqtb53izpxnlvnrtv4ejrchf2jvo9lsnhw7zd2dol7o6npkhkpjzlrdoexg81oo9kumutg2oahbt4b5uvqoo0kjadkg6lbj3t1d3o1vt6rnbsnavrtaysjmwodjejamenxjo7zz8whsx5pm2nldz3wl5zm6n0h82zfj9bxd51nfwv7cqcuamhne0zownmqd1febv6k9kb5h5s13k5dsknkal0ofcqpzmqtyfmsmqpvebsr5vqb1r4wzalzwq7yvmtikqhnfswx20ym7pjr1nt3nmf0ylade0d56vrf9lavemhlqwud9hpdxxhlzlkxu6c1m14l4s4a3o9r612pd6vwozjxux09f8ph9f9yx2jziqz2xuqvbbbvgp1sykav3cfl0w13dhcfezx10fjp7h5ype6ac4fx54sxshaaxd4jee5bcvga5oeslwmagtqpfja2wr7iqf05s5al328uqgw5klsg21wqj07i4okj3wbp9wo7wjry5ew5i7tky2xw1d53v18h9mcfzgr9fgzn5p6usx8ooagkqpileaimeiix4p39h3hkjodrbtn1x1hkza0gqpscnfl8zutfm3zca6du049peb9ccayp42bmlbgq5svxbzqsffo305uvvp45og46vexsa3ghar55fjqh2encpmyhhlli8x28j4qdh7bfklk76s1hvrjjl90r0b7ikl55slrt2txsq9w1tvm9i9z2nz45427rzf965cbke03o7ailer8yg75zaksqhylusthmqdsg518k0ex7xnlhlsuxlgm7m08c9q5q4r92m9rtr238curwi86197xgl5g75fdmv8gvsnvq95jza43l3g3a1zs72yty0argfi9eavgczkdaqt2dm83hbnck3sfgaxn4ctedkxe7a4s9jm3l8u09xtrgpygrydd99nszvzvjrjt6on9banybm4y5h6w9qt2487glrs5lgyac47r1cyb7icppmj92tuxnk3e3ymg2mimjszhy9skh8nb72bxgjslebz2uoxqh873j6wft8s
54d3heiodn5zxlzdat71vgfnce4q4qwozotfh326xd78un1po00o1n6rc8zgh8vof2dsw4ozgy62tgkhoy41geze1z77xc5aqgalembewss25e4d5l9c0lias1aqq4ota8sp96jtho2ixo93ascmyfl6argbfbkah73urehyqrljaqefpy8n848zopk79hy5622plq1hs3v8s8qyinbhd6wq9jon83722ert8848xfk1a74zcqquzo5nztbyeec53k601p2a46rr3u7yn6wgtgzyac6yztbk7zu1yi3hykwk07b2bo0rfqh95yp4jt6hwvyv8kikxw9dnondfzhgiopp9d47wctz39v67hz1rq2g4pou6n8xy05wgnz5wze6d3znny7qu4fozr2duf79366zqjb2w4o6lpm4bqsvyq1sknnuasvdooqr2fn0ogbivks5mrddw8rpklxfwvfx7ad7s5ep63x33h724oohr1qyi5dzy11icr6h3yr6g1ryma1de6dv21ty42gd5bbq3bel4p1co8giwt 00:06:26.179 15:29:27 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:26.179 15:29:27 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:26.179 15:29:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:26.179 15:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.179 [2024-04-17 15:29:27.502511] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:26.179 [2024-04-17 15:29:27.502614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62688 ] 00:06:26.179 { 00:06:26.179 "subsystems": [ 00:06:26.179 { 00:06:26.179 "subsystem": "bdev", 00:06:26.179 "config": [ 00:06:26.179 { 00:06:26.179 "params": { 00:06:26.179 "trtype": "pcie", 00:06:26.180 "traddr": "0000:00:10.0", 00:06:26.180 "name": "Nvme0" 00:06:26.180 }, 00:06:26.180 "method": "bdev_nvme_attach_controller" 00:06:26.180 }, 00:06:26.180 { 00:06:26.180 "method": "bdev_wait_for_examine" 00:06:26.180 } 00:06:26.180 ] 00:06:26.180 } 00:06:26.180 ] 00:06:26.180 } 00:06:26.438 [2024-04-17 15:29:27.643119] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.438 [2024-04-17 15:29:27.778135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.955  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:26.955 00:06:26.955 15:29:28 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:26.955 15:29:28 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:26.955 15:29:28 -- dd/common.sh@31 -- # xtrace_disable 00:06:26.955 15:29:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.955 [2024-04-17 15:29:28.367127] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
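The long run of random characters above is the 4 KiB payload that basic_offset generates with gen_bytes 4096 and writes one block into the bdev with --seek=1; the test then reads a single block back from the same offset with --skip=1 --count=1 and compares the bytes it gets against the string it generated (the later [[ rke0790... == \r\k\e... ]] line is that comparison, with bash escaping every character of the expected value). A rough sketch of the same offset round trip, assuming the paths from this run and substituting /dev/urandom for the suite's gen_bytes helper:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
# 4 KiB of printable random data standing in for gen_bytes 4096
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
printf '%s' "$data" > "$TESTDIR/dd.dump0"
# write the payload one block past the start of the bdev
"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json /tmp/bdev.json
# read exactly one block back from that offset
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --skip=1 --count=1 --json /tmp/bdev.json
# compare the first 4096 bytes that came back with what went in
read -rn4096 data_check < "$TESTDIR/dd.dump1"
[[ "$data_check" == "$data" ]]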
00:06:26.955 [2024-04-17 15:29:28.367245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62702 ] 00:06:26.955 { 00:06:26.955 "subsystems": [ 00:06:26.955 { 00:06:26.955 "subsystem": "bdev", 00:06:26.955 "config": [ 00:06:26.955 { 00:06:26.955 "params": { 00:06:26.955 "trtype": "pcie", 00:06:26.955 "traddr": "0000:00:10.0", 00:06:26.955 "name": "Nvme0" 00:06:26.955 }, 00:06:26.955 "method": "bdev_nvme_attach_controller" 00:06:26.955 }, 00:06:26.955 { 00:06:26.955 "method": "bdev_wait_for_examine" 00:06:26.955 } 00:06:26.955 ] 00:06:26.955 } 00:06:26.955 ] 00:06:26.955 } 00:06:27.214 [2024-04-17 15:29:28.500716] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.473 [2024-04-17 15:29:28.660920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.042  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:28.042 00:06:28.042 15:29:29 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:28.042 ************************************ 00:06:28.042 END TEST dd_rw_offset 00:06:28.042 ************************************ 00:06:28.043 15:29:29 -- dd/basic_rw.sh@72 -- # [[ rke0790vbv1c6r9kmuc8trd7cwgzhzml9egdnnfz1txsy5vru2re8tsjrni32myppm13pp486unb1vdiapd8v4593ynpukog6w28f5lptjary8xsc42bplio1lssc91gf8xoljv6vvpm9xs5twfllqgqq177q569rtceqruua0ydkrmu28kp2vi3one94pk5rz58z5sljipxjiey6aj4vy1zfffs11ci2h8unue9r6iyzwnirh7h5pe0whd37jos1jxx95l613y2l7e7vmk4kr8mmlkophe8le0hnmvsuwntf2qr3smkf9xnl49urqzo2qdfim1l8ogo03bqe7hh60gc990bh89wo7ejv2sp80j64lmoqxg06859gc3ne6kfu7depyx3rfea2h0485zqwdnysas8eddqb4urpsmyy9mmd5a19yl7man2744fhkompm0bp349rbqwigc2rv37iacl0whmnt95id8m0e6zer2fsez9adl25lvwzn65iwbwy3ft1jn4dv82m129duj9t3idtxf7osz7rqmqxp8zuuw8p2tc688o3htwusshkwtlf2mbss6k62f5yhvh8i2ljggnn4yx2vctz14oe9a88qhgrohaeip7eegudpa5x4sds9gjteh5goe8f4blrl7h3q2q5z8c4siq39h4mgk97pptzj69qikwuj195kmu4xg7i9gw6ej23vzjcy6xtpdaeny2lf3fkzi2rrr6l47nfaln88694olnfbckr197g7lvxl3yxvxvb7bak2q1et2avfz982e3tmj976r4rwqdcsb7xf9pr1goze6ashvtfw9mb1ke8893nlaarm5bf3ahyrtpf65fdyyrde5ot046qwcwgajfu2sahauxv3mhfhzbhjnq8v745rtlppwdvkrak8sk188w64pezhv4erbf913a659inx1g2vixeaevfoxdnck2o33x6tvqx79su3knwt5ulszzweqpk8w4pymye4jyebnsh1mr03jtyh18en238fqkvh19cgix0ap6mibvrevgkmfj8egjajoetemlpg6heorpsz426cxkg37r406ijp8uc6h4c5ze74yjajfb0lg05nxwfzp7qadq415zcllyiyzjofwake0izharuqpu180rsnspz2vdtc10hk9sx9fnln5b72bknf7tb7abazos9mjl5kot2prry3musivneriamm0plc9iskiz0m7dgo413ble99bd8a0bbrs50qs8dlnzmhvfbwglhe9yoz9tby69elru8vqd9oakyfzgu5ce2a2folrbddxgvtcytnb9kplngyykp03clvm1d0v90nifcy4tqwan9co372xr746hz0ni0x7nadfeq1zwg7otgsc63qwutkl2sqtwnvhlkva9vxtej8durqg5f5lfesh42980nheso2hwbaaftg5c9c2zn1f50k4e5i4vassu7zchpn6ddanzyt6daoiv1zcvzufrr2l9vncx2058ja31pe1pz2ijn9szyjnnohel485bswu5of9g7hj7pxrnzubscup9tszebg686gm46k743ud7vulot4gxvlxme9yjcrbw436tt9vz18156etuxluptyhkbiy17dqmrbezoqi2n5h8l5nylupf38zyknlal9yndjdfkr2m74s7nsfb5ssbbjvw5aav2ir1a26dsnqhspnj9onzzbx4fd55bjlum7ay3lt9qwt0xyo20mid7fdjjwtycybs6ahnbccbglrn4bde54ybmnt2xf7j27114zd8emk4zr35e15p68m911a8jiyppsn1dbzc8yg50o35596mwc7gigvx98awigxtu9he7qsm164qprc55tl8tzi6sfsk4kw3a0c4of0dfd5zo7371c24v1w4ady7fh7ofga0as36gaelsarbib6aq29ifd3og5tdp4gs1z9ju2ngdxowunp10ls5e3urfe9hos0wmevv5n4wqdmckpn4caryraqiblc2z1sjqxp5xdm3z8dmtqeqeoxwczin2j53kpmltoxhhpawwex9ws2ne7cf5spdfzbrhjzwdmtzw6tsg6dx2t4tplfexqnpjjp9metxdqsj3ff5kf698h5r9xx61glj5pt2q2ay7xicpb2k7w59t5mw2wd5wwgug931q9nav6scetnicl1y87mcbllp7bi2chz65fyu96xoskxbpr7apujf38zsuc18khbh7dae48un4d3
p7czbzeux898b3t25qzjzc7p8hxhfeamyfw0q1qsjz84m7su0htbemkrgdbnx23nlgobaad5ue9tsf82ilosbhyiayzluzbybnd4fs3xmlbcpwiee3z6be2bzb5h962wdm1tgrwqo2jngycizb9ze94iivbqnnybe51dmc5jr4b0jkcaipm3snr5frjtwlxapsfzf27fj46b6m7j3um55nmiu06tyi8iws931liircq5brxz1k9le94cwi18ic0mqtb53izpxnlvnrtv4ejrchf2jvo9lsnhw7zd2dol7o6npkhkpjzlrdoexg81oo9kumutg2oahbt4b5uvqoo0kjadkg6lbj3t1d3o1vt6rnbsnavrtaysjmwodjejamenxjo7zz8whsx5pm2nldz3wl5zm6n0h82zfj9bxd51nfwv7cqcuamhne0zownmqd1febv6k9kb5h5s13k5dsknkal0ofcqpzmqtyfmsmqpvebsr5vqb1r4wzalzwq7yvmtikqhnfswx20ym7pjr1nt3nmf0ylade0d56vrf9lavemhlqwud9hpdxxhlzlkxu6c1m14l4s4a3o9r612pd6vwozjxux09f8ph9f9yx2jziqz2xuqvbbbvgp1sykav3cfl0w13dhcfezx10fjp7h5ype6ac4fx54sxshaaxd4jee5bcvga5oeslwmagtqpfja2wr7iqf05s5al328uqgw5klsg21wqj07i4okj3wbp9wo7wjry5ew5i7tky2xw1d53v18h9mcfzgr9fgzn5p6usx8ooagkqpileaimeiix4p39h3hkjodrbtn1x1hkza0gqpscnfl8zutfm3zca6du049peb9ccayp42bmlbgq5svxbzqsffo305uvvp45og46vexsa3ghar55fjqh2encpmyhhlli8x28j4qdh7bfklk76s1hvrjjl90r0b7ikl55slrt2txsq9w1tvm9i9z2nz45427rzf965cbke03o7ailer8yg75zaksqhylusthmqdsg518k0ex7xnlhlsuxlgm7m08c9q5q4r92m9rtr238curwi86197xgl5g75fdmv8gvsnvq95jza43l3g3a1zs72yty0argfi9eavgczkdaqt2dm83hbnck3sfgaxn4ctedkxe7a4s9jm3l8u09xtrgpygrydd99nszvzvjrjt6on9banybm4y5h6w9qt2487glrs5lgyac47r1cyb7icppmj92tuxnk3e3ymg2mimjszhy9skh8nb72bxgjslebz2uoxqh873j6wft8s54d3heiodn5zxlzdat71vgfnce4q4qwozotfh326xd78un1po00o1n6rc8zgh8vof2dsw4ozgy62tgkhoy41geze1z77xc5aqgalembewss25e4d5l9c0lias1aqq4ota8sp96jtho2ixo93ascmyfl6argbfbkah73urehyqrljaqefpy8n848zopk79hy5622plq1hs3v8s8qyinbhd6wq9jon83722ert8848xfk1a74zcqquzo5nztbyeec53k601p2a46rr3u7yn6wgtgzyac6yztbk7zu1yi3hykwk07b2bo0rfqh95yp4jt6hwvyv8kikxw9dnondfzhgiopp9d47wctz39v67hz1rq2g4pou6n8xy05wgnz5wze6d3znny7qu4fozr2duf79366zqjb2w4o6lpm4bqsvyq1sknnuasvdooqr2fn0ogbivks5mrddw8rpklxfwvfx7ad7s5ep63x33h724oohr1qyi5dzy11icr6h3yr6g1ryma1de6dv21ty42gd5bbq3bel4p1co8giwt == 
\r\k\e\0\7\9\0\v\b\v\1\c\6\r\9\k\m\u\c\8\t\r\d\7\c\w\g\z\h\z\m\l\9\e\g\d\n\n\f\z\1\t\x\s\y\5\v\r\u\2\r\e\8\t\s\j\r\n\i\3\2\m\y\p\p\m\1\3\p\p\4\8\6\u\n\b\1\v\d\i\a\p\d\8\v\4\5\9\3\y\n\p\u\k\o\g\6\w\2\8\f\5\l\p\t\j\a\r\y\8\x\s\c\4\2\b\p\l\i\o\1\l\s\s\c\9\1\g\f\8\x\o\l\j\v\6\v\v\p\m\9\x\s\5\t\w\f\l\l\q\g\q\q\1\7\7\q\5\6\9\r\t\c\e\q\r\u\u\a\0\y\d\k\r\m\u\2\8\k\p\2\v\i\3\o\n\e\9\4\p\k\5\r\z\5\8\z\5\s\l\j\i\p\x\j\i\e\y\6\a\j\4\v\y\1\z\f\f\f\s\1\1\c\i\2\h\8\u\n\u\e\9\r\6\i\y\z\w\n\i\r\h\7\h\5\p\e\0\w\h\d\3\7\j\o\s\1\j\x\x\9\5\l\6\1\3\y\2\l\7\e\7\v\m\k\4\k\r\8\m\m\l\k\o\p\h\e\8\l\e\0\h\n\m\v\s\u\w\n\t\f\2\q\r\3\s\m\k\f\9\x\n\l\4\9\u\r\q\z\o\2\q\d\f\i\m\1\l\8\o\g\o\0\3\b\q\e\7\h\h\6\0\g\c\9\9\0\b\h\8\9\w\o\7\e\j\v\2\s\p\8\0\j\6\4\l\m\o\q\x\g\0\6\8\5\9\g\c\3\n\e\6\k\f\u\7\d\e\p\y\x\3\r\f\e\a\2\h\0\4\8\5\z\q\w\d\n\y\s\a\s\8\e\d\d\q\b\4\u\r\p\s\m\y\y\9\m\m\d\5\a\1\9\y\l\7\m\a\n\2\7\4\4\f\h\k\o\m\p\m\0\b\p\3\4\9\r\b\q\w\i\g\c\2\r\v\3\7\i\a\c\l\0\w\h\m\n\t\9\5\i\d\8\m\0\e\6\z\e\r\2\f\s\e\z\9\a\d\l\2\5\l\v\w\z\n\6\5\i\w\b\w\y\3\f\t\1\j\n\4\d\v\8\2\m\1\2\9\d\u\j\9\t\3\i\d\t\x\f\7\o\s\z\7\r\q\m\q\x\p\8\z\u\u\w\8\p\2\t\c\6\8\8\o\3\h\t\w\u\s\s\h\k\w\t\l\f\2\m\b\s\s\6\k\6\2\f\5\y\h\v\h\8\i\2\l\j\g\g\n\n\4\y\x\2\v\c\t\z\1\4\o\e\9\a\8\8\q\h\g\r\o\h\a\e\i\p\7\e\e\g\u\d\p\a\5\x\4\s\d\s\9\g\j\t\e\h\5\g\o\e\8\f\4\b\l\r\l\7\h\3\q\2\q\5\z\8\c\4\s\i\q\3\9\h\4\m\g\k\9\7\p\p\t\z\j\6\9\q\i\k\w\u\j\1\9\5\k\m\u\4\x\g\7\i\9\g\w\6\e\j\2\3\v\z\j\c\y\6\x\t\p\d\a\e\n\y\2\l\f\3\f\k\z\i\2\r\r\r\6\l\4\7\n\f\a\l\n\8\8\6\9\4\o\l\n\f\b\c\k\r\1\9\7\g\7\l\v\x\l\3\y\x\v\x\v\b\7\b\a\k\2\q\1\e\t\2\a\v\f\z\9\8\2\e\3\t\m\j\9\7\6\r\4\r\w\q\d\c\s\b\7\x\f\9\p\r\1\g\o\z\e\6\a\s\h\v\t\f\w\9\m\b\1\k\e\8\8\9\3\n\l\a\a\r\m\5\b\f\3\a\h\y\r\t\p\f\6\5\f\d\y\y\r\d\e\5\o\t\0\4\6\q\w\c\w\g\a\j\f\u\2\s\a\h\a\u\x\v\3\m\h\f\h\z\b\h\j\n\q\8\v\7\4\5\r\t\l\p\p\w\d\v\k\r\a\k\8\s\k\1\8\8\w\6\4\p\e\z\h\v\4\e\r\b\f\9\1\3\a\6\5\9\i\n\x\1\g\2\v\i\x\e\a\e\v\f\o\x\d\n\c\k\2\o\3\3\x\6\t\v\q\x\7\9\s\u\3\k\n\w\t\5\u\l\s\z\z\w\e\q\p\k\8\w\4\p\y\m\y\e\4\j\y\e\b\n\s\h\1\m\r\0\3\j\t\y\h\1\8\e\n\2\3\8\f\q\k\v\h\1\9\c\g\i\x\0\a\p\6\m\i\b\v\r\e\v\g\k\m\f\j\8\e\g\j\a\j\o\e\t\e\m\l\p\g\6\h\e\o\r\p\s\z\4\2\6\c\x\k\g\3\7\r\4\0\6\i\j\p\8\u\c\6\h\4\c\5\z\e\7\4\y\j\a\j\f\b\0\l\g\0\5\n\x\w\f\z\p\7\q\a\d\q\4\1\5\z\c\l\l\y\i\y\z\j\o\f\w\a\k\e\0\i\z\h\a\r\u\q\p\u\1\8\0\r\s\n\s\p\z\2\v\d\t\c\1\0\h\k\9\s\x\9\f\n\l\n\5\b\7\2\b\k\n\f\7\t\b\7\a\b\a\z\o\s\9\m\j\l\5\k\o\t\2\p\r\r\y\3\m\u\s\i\v\n\e\r\i\a\m\m\0\p\l\c\9\i\s\k\i\z\0\m\7\d\g\o\4\1\3\b\l\e\9\9\b\d\8\a\0\b\b\r\s\5\0\q\s\8\d\l\n\z\m\h\v\f\b\w\g\l\h\e\9\y\o\z\9\t\b\y\6\9\e\l\r\u\8\v\q\d\9\o\a\k\y\f\z\g\u\5\c\e\2\a\2\f\o\l\r\b\d\d\x\g\v\t\c\y\t\n\b\9\k\p\l\n\g\y\y\k\p\0\3\c\l\v\m\1\d\0\v\9\0\n\i\f\c\y\4\t\q\w\a\n\9\c\o\3\7\2\x\r\7\4\6\h\z\0\n\i\0\x\7\n\a\d\f\e\q\1\z\w\g\7\o\t\g\s\c\6\3\q\w\u\t\k\l\2\s\q\t\w\n\v\h\l\k\v\a\9\v\x\t\e\j\8\d\u\r\q\g\5\f\5\l\f\e\s\h\4\2\9\8\0\n\h\e\s\o\2\h\w\b\a\a\f\t\g\5\c\9\c\2\z\n\1\f\5\0\k\4\e\5\i\4\v\a\s\s\u\7\z\c\h\p\n\6\d\d\a\n\z\y\t\6\d\a\o\i\v\1\z\c\v\z\u\f\r\r\2\l\9\v\n\c\x\2\0\5\8\j\a\3\1\p\e\1\p\z\2\i\j\n\9\s\z\y\j\n\n\o\h\e\l\4\8\5\b\s\w\u\5\o\f\9\g\7\h\j\7\p\x\r\n\z\u\b\s\c\u\p\9\t\s\z\e\b\g\6\8\6\g\m\4\6\k\7\4\3\u\d\7\v\u\l\o\t\4\g\x\v\l\x\m\e\9\y\j\c\r\b\w\4\3\6\t\t\9\v\z\1\8\1\5\6\e\t\u\x\l\u\p\t\y\h\k\b\i\y\1\7\d\q\m\r\b\e\z\o\q\i\2\n\5\h\8\l\5\n\y\l\u\p\f\3\8\z\y\k\n\l\a\l\9\y\n\d\j\d\f\k\r\2\m\7\4\s\7\n\s\f\b\5\s\s\b\b\j\v\w\5\a\a\v\2\i\r\1\a\2\6\d\s\n\q\h\s\p\n\j\9\o\n\z\z\b\x\4\f\d\5\5\b\j\l\u\m\7\a\y\3\l\t\9\q\w\t\0\x\y\o\2\0\m\i\d\7\f\d\j\j\w\t\y\c\y\b\s\6\a\h\n\b\c\c\b\g\l\r\n\4\b\d\e\5\
4\y\b\m\n\t\2\x\f\7\j\2\7\1\1\4\z\d\8\e\m\k\4\z\r\3\5\e\1\5\p\6\8\m\9\1\1\a\8\j\i\y\p\p\s\n\1\d\b\z\c\8\y\g\5\0\o\3\5\5\9\6\m\w\c\7\g\i\g\v\x\9\8\a\w\i\g\x\t\u\9\h\e\7\q\s\m\1\6\4\q\p\r\c\5\5\t\l\8\t\z\i\6\s\f\s\k\4\k\w\3\a\0\c\4\o\f\0\d\f\d\5\z\o\7\3\7\1\c\2\4\v\1\w\4\a\d\y\7\f\h\7\o\f\g\a\0\a\s\3\6\g\a\e\l\s\a\r\b\i\b\6\a\q\2\9\i\f\d\3\o\g\5\t\d\p\4\g\s\1\z\9\j\u\2\n\g\d\x\o\w\u\n\p\1\0\l\s\5\e\3\u\r\f\e\9\h\o\s\0\w\m\e\v\v\5\n\4\w\q\d\m\c\k\p\n\4\c\a\r\y\r\a\q\i\b\l\c\2\z\1\s\j\q\x\p\5\x\d\m\3\z\8\d\m\t\q\e\q\e\o\x\w\c\z\i\n\2\j\5\3\k\p\m\l\t\o\x\h\h\p\a\w\w\e\x\9\w\s\2\n\e\7\c\f\5\s\p\d\f\z\b\r\h\j\z\w\d\m\t\z\w\6\t\s\g\6\d\x\2\t\4\t\p\l\f\e\x\q\n\p\j\j\p\9\m\e\t\x\d\q\s\j\3\f\f\5\k\f\6\9\8\h\5\r\9\x\x\6\1\g\l\j\5\p\t\2\q\2\a\y\7\x\i\c\p\b\2\k\7\w\5\9\t\5\m\w\2\w\d\5\w\w\g\u\g\9\3\1\q\9\n\a\v\6\s\c\e\t\n\i\c\l\1\y\8\7\m\c\b\l\l\p\7\b\i\2\c\h\z\6\5\f\y\u\9\6\x\o\s\k\x\b\p\r\7\a\p\u\j\f\3\8\z\s\u\c\1\8\k\h\b\h\7\d\a\e\4\8\u\n\4\d\3\p\7\c\z\b\z\e\u\x\8\9\8\b\3\t\2\5\q\z\j\z\c\7\p\8\h\x\h\f\e\a\m\y\f\w\0\q\1\q\s\j\z\8\4\m\7\s\u\0\h\t\b\e\m\k\r\g\d\b\n\x\2\3\n\l\g\o\b\a\a\d\5\u\e\9\t\s\f\8\2\i\l\o\s\b\h\y\i\a\y\z\l\u\z\b\y\b\n\d\4\f\s\3\x\m\l\b\c\p\w\i\e\e\3\z\6\b\e\2\b\z\b\5\h\9\6\2\w\d\m\1\t\g\r\w\q\o\2\j\n\g\y\c\i\z\b\9\z\e\9\4\i\i\v\b\q\n\n\y\b\e\5\1\d\m\c\5\j\r\4\b\0\j\k\c\a\i\p\m\3\s\n\r\5\f\r\j\t\w\l\x\a\p\s\f\z\f\2\7\f\j\4\6\b\6\m\7\j\3\u\m\5\5\n\m\i\u\0\6\t\y\i\8\i\w\s\9\3\1\l\i\i\r\c\q\5\b\r\x\z\1\k\9\l\e\9\4\c\w\i\1\8\i\c\0\m\q\t\b\5\3\i\z\p\x\n\l\v\n\r\t\v\4\e\j\r\c\h\f\2\j\v\o\9\l\s\n\h\w\7\z\d\2\d\o\l\7\o\6\n\p\k\h\k\p\j\z\l\r\d\o\e\x\g\8\1\o\o\9\k\u\m\u\t\g\2\o\a\h\b\t\4\b\5\u\v\q\o\o\0\k\j\a\d\k\g\6\l\b\j\3\t\1\d\3\o\1\v\t\6\r\n\b\s\n\a\v\r\t\a\y\s\j\m\w\o\d\j\e\j\a\m\e\n\x\j\o\7\z\z\8\w\h\s\x\5\p\m\2\n\l\d\z\3\w\l\5\z\m\6\n\0\h\8\2\z\f\j\9\b\x\d\5\1\n\f\w\v\7\c\q\c\u\a\m\h\n\e\0\z\o\w\n\m\q\d\1\f\e\b\v\6\k\9\k\b\5\h\5\s\1\3\k\5\d\s\k\n\k\a\l\0\o\f\c\q\p\z\m\q\t\y\f\m\s\m\q\p\v\e\b\s\r\5\v\q\b\1\r\4\w\z\a\l\z\w\q\7\y\v\m\t\i\k\q\h\n\f\s\w\x\2\0\y\m\7\p\j\r\1\n\t\3\n\m\f\0\y\l\a\d\e\0\d\5\6\v\r\f\9\l\a\v\e\m\h\l\q\w\u\d\9\h\p\d\x\x\h\l\z\l\k\x\u\6\c\1\m\1\4\l\4\s\4\a\3\o\9\r\6\1\2\p\d\6\v\w\o\z\j\x\u\x\0\9\f\8\p\h\9\f\9\y\x\2\j\z\i\q\z\2\x\u\q\v\b\b\b\v\g\p\1\s\y\k\a\v\3\c\f\l\0\w\1\3\d\h\c\f\e\z\x\1\0\f\j\p\7\h\5\y\p\e\6\a\c\4\f\x\5\4\s\x\s\h\a\a\x\d\4\j\e\e\5\b\c\v\g\a\5\o\e\s\l\w\m\a\g\t\q\p\f\j\a\2\w\r\7\i\q\f\0\5\s\5\a\l\3\2\8\u\q\g\w\5\k\l\s\g\2\1\w\q\j\0\7\i\4\o\k\j\3\w\b\p\9\w\o\7\w\j\r\y\5\e\w\5\i\7\t\k\y\2\x\w\1\d\5\3\v\1\8\h\9\m\c\f\z\g\r\9\f\g\z\n\5\p\6\u\s\x\8\o\o\a\g\k\q\p\i\l\e\a\i\m\e\i\i\x\4\p\3\9\h\3\h\k\j\o\d\r\b\t\n\1\x\1\h\k\z\a\0\g\q\p\s\c\n\f\l\8\z\u\t\f\m\3\z\c\a\6\d\u\0\4\9\p\e\b\9\c\c\a\y\p\4\2\b\m\l\b\g\q\5\s\v\x\b\z\q\s\f\f\o\3\0\5\u\v\v\p\4\5\o\g\4\6\v\e\x\s\a\3\g\h\a\r\5\5\f\j\q\h\2\e\n\c\p\m\y\h\h\l\l\i\8\x\2\8\j\4\q\d\h\7\b\f\k\l\k\7\6\s\1\h\v\r\j\j\l\9\0\r\0\b\7\i\k\l\5\5\s\l\r\t\2\t\x\s\q\9\w\1\t\v\m\9\i\9\z\2\n\z\4\5\4\2\7\r\z\f\9\6\5\c\b\k\e\0\3\o\7\a\i\l\e\r\8\y\g\7\5\z\a\k\s\q\h\y\l\u\s\t\h\m\q\d\s\g\5\1\8\k\0\e\x\7\x\n\l\h\l\s\u\x\l\g\m\7\m\0\8\c\9\q\5\q\4\r\9\2\m\9\r\t\r\2\3\8\c\u\r\w\i\8\6\1\9\7\x\g\l\5\g\7\5\f\d\m\v\8\g\v\s\n\v\q\9\5\j\z\a\4\3\l\3\g\3\a\1\z\s\7\2\y\t\y\0\a\r\g\f\i\9\e\a\v\g\c\z\k\d\a\q\t\2\d\m\8\3\h\b\n\c\k\3\s\f\g\a\x\n\4\c\t\e\d\k\x\e\7\a\4\s\9\j\m\3\l\8\u\0\9\x\t\r\g\p\y\g\r\y\d\d\9\9\n\s\z\v\z\v\j\r\j\t\6\o\n\9\b\a\n\y\b\m\4\y\5\h\6\w\9\q\t\2\4\8\7\g\l\r\s\5\l\g\y\a\c\4\7\r\1\c\y\b\7\i\c\p\p\m\j\9\2\t\u\x\n\k\3\e\3\y\m\g\2\m\i\m\j\s\z\h\y\9\s\k\h\8\n\b\7\2\b\x\g\j\s\l\e\b\z\2\u\o\x\q\h\8\7\3\j\6\w\f\t\8\s\5\4\d\3\h
\e\i\o\d\n\5\z\x\l\z\d\a\t\7\1\v\g\f\n\c\e\4\q\4\q\w\o\z\o\t\f\h\3\2\6\x\d\7\8\u\n\1\p\o\0\0\o\1\n\6\r\c\8\z\g\h\8\v\o\f\2\d\s\w\4\o\z\g\y\6\2\t\g\k\h\o\y\4\1\g\e\z\e\1\z\7\7\x\c\5\a\q\g\a\l\e\m\b\e\w\s\s\2\5\e\4\d\5\l\9\c\0\l\i\a\s\1\a\q\q\4\o\t\a\8\s\p\9\6\j\t\h\o\2\i\x\o\9\3\a\s\c\m\y\f\l\6\a\r\g\b\f\b\k\a\h\7\3\u\r\e\h\y\q\r\l\j\a\q\e\f\p\y\8\n\8\4\8\z\o\p\k\7\9\h\y\5\6\2\2\p\l\q\1\h\s\3\v\8\s\8\q\y\i\n\b\h\d\6\w\q\9\j\o\n\8\3\7\2\2\e\r\t\8\8\4\8\x\f\k\1\a\7\4\z\c\q\q\u\z\o\5\n\z\t\b\y\e\e\c\5\3\k\6\0\1\p\2\a\4\6\r\r\3\u\7\y\n\6\w\g\t\g\z\y\a\c\6\y\z\t\b\k\7\z\u\1\y\i\3\h\y\k\w\k\0\7\b\2\b\o\0\r\f\q\h\9\5\y\p\4\j\t\6\h\w\v\y\v\8\k\i\k\x\w\9\d\n\o\n\d\f\z\h\g\i\o\p\p\9\d\4\7\w\c\t\z\3\9\v\6\7\h\z\1\r\q\2\g\4\p\o\u\6\n\8\x\y\0\5\w\g\n\z\5\w\z\e\6\d\3\z\n\n\y\7\q\u\4\f\o\z\r\2\d\u\f\7\9\3\6\6\z\q\j\b\2\w\4\o\6\l\p\m\4\b\q\s\v\y\q\1\s\k\n\n\u\a\s\v\d\o\o\q\r\2\f\n\0\o\g\b\i\v\k\s\5\m\r\d\d\w\8\r\p\k\l\x\f\w\v\f\x\7\a\d\7\s\5\e\p\6\3\x\3\3\h\7\2\4\o\o\h\r\1\q\y\i\5\d\z\y\1\1\i\c\r\6\h\3\y\r\6\g\1\r\y\m\a\1\d\e\6\d\v\2\1\t\y\4\2\g\d\5\b\b\q\3\b\e\l\4\p\1\c\o\8\g\i\w\t ]] 00:06:28.043 00:06:28.043 real 0m1.791s 00:06:28.043 user 0m1.284s 00:06:28.043 sys 0m0.795s 00:06:28.043 15:29:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.043 15:29:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.043 15:29:29 -- dd/basic_rw.sh@1 -- # cleanup 00:06:28.043 15:29:29 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:28.043 15:29:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:28.043 15:29:29 -- dd/common.sh@11 -- # local nvme_ref= 00:06:28.043 15:29:29 -- dd/common.sh@12 -- # local size=0xffff 00:06:28.043 15:29:29 -- dd/common.sh@14 -- # local bs=1048576 00:06:28.043 15:29:29 -- dd/common.sh@15 -- # local count=1 00:06:28.043 15:29:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:28.043 15:29:29 -- dd/common.sh@18 -- # gen_conf 00:06:28.043 15:29:29 -- dd/common.sh@31 -- # xtrace_disable 00:06:28.043 15:29:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.043 [2024-04-17 15:29:29.284961] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:28.043 [2024-04-17 15:29:29.285058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:06:28.043 { 00:06:28.043 "subsystems": [ 00:06:28.043 { 00:06:28.043 "subsystem": "bdev", 00:06:28.043 "config": [ 00:06:28.043 { 00:06:28.043 "params": { 00:06:28.043 "trtype": "pcie", 00:06:28.043 "traddr": "0000:00:10.0", 00:06:28.043 "name": "Nvme0" 00:06:28.043 }, 00:06:28.043 "method": "bdev_nvme_attach_controller" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "method": "bdev_wait_for_examine" 00:06:28.043 } 00:06:28.043 ] 00:06:28.043 } 00:06:28.043 ] 00:06:28.043 } 00:06:28.043 [2024-04-17 15:29:29.423905] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.302 [2024-04-17 15:29:29.570212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.819  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:28.819 00:06:28.819 15:29:30 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.819 ************************************ 00:06:28.819 END TEST spdk_dd_basic_rw 00:06:28.819 ************************************ 00:06:28.819 00:06:28.819 real 0m23.213s 00:06:28.819 user 0m16.857s 00:06:28.819 sys 0m8.997s 00:06:28.819 15:29:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.819 15:29:30 -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 15:29:30 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:28.819 15:29:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.819 15:29:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.819 15:29:30 -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 ************************************ 00:06:28.819 START TEST spdk_dd_posix 00:06:28.819 ************************************ 00:06:28.819 15:29:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:29.079 * Looking for test storage... 
00:06:29.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:29.079 15:29:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.079 15:29:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.079 15:29:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.079 15:29:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.079 15:29:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.079 15:29:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.079 15:29:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.079 15:29:30 -- paths/export.sh@5 -- # export PATH 00:06:29.079 15:29:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.079 15:29:30 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:29.079 15:29:30 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:29.079 15:29:30 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:29.079 15:29:30 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:29.079 15:29:30 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.079 15:29:30 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.079 15:29:30 -- dd/posix.sh@130 -- # tests 00:06:29.079 15:29:30 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:29.079 * First test run, liburing in use 00:06:29.079 15:29:30 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:06:29.079 15:29:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.079 15:29:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.079 15:29:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.079 ************************************ 00:06:29.079 START TEST dd_flag_append 00:06:29.079 ************************************ 00:06:29.079 15:29:30 -- common/autotest_common.sh@1111 -- # append 00:06:29.079 15:29:30 -- dd/posix.sh@16 -- # local dump0 00:06:29.079 15:29:30 -- dd/posix.sh@17 -- # local dump1 00:06:29.079 15:29:30 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:29.079 15:29:30 -- dd/common.sh@98 -- # xtrace_disable 00:06:29.079 15:29:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.079 15:29:30 -- dd/posix.sh@19 -- # dump0=3cc5nzy3be7oq4z9p6wvlmnhb7gzidz1 00:06:29.079 15:29:30 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:29.079 15:29:30 -- dd/common.sh@98 -- # xtrace_disable 00:06:29.079 15:29:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.079 15:29:30 -- dd/posix.sh@20 -- # dump1=bqwa4162b1e3w8agwpq9o7naqqyb77d3 00:06:29.079 15:29:30 -- dd/posix.sh@22 -- # printf %s 3cc5nzy3be7oq4z9p6wvlmnhb7gzidz1 00:06:29.079 15:29:30 -- dd/posix.sh@23 -- # printf %s bqwa4162b1e3w8agwpq9o7naqqyb77d3 00:06:29.079 15:29:30 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:29.079 [2024-04-17 15:29:30.461347] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:29.079 [2024-04-17 15:29:30.461465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62810 ] 00:06:29.348 [2024-04-17 15:29:30.599469] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.348 [2024-04-17 15:29:30.741355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.875  Copying: 32/32 [B] (average 31 kBps) 00:06:29.875 00:06:29.875 15:29:31 -- dd/posix.sh@27 -- # [[ bqwa4162b1e3w8agwpq9o7naqqyb77d33cc5nzy3be7oq4z9p6wvlmnhb7gzidz1 == \b\q\w\a\4\1\6\2\b\1\e\3\w\8\a\g\w\p\q\9\o\7\n\a\q\q\y\b\7\7\d\3\3\c\c\5\n\z\y\3\b\e\7\o\q\4\z\9\p\6\w\v\l\m\n\h\b\7\g\z\i\d\z\1 ]] 00:06:29.875 00:06:29.875 real 0m0.811s 00:06:29.875 user 0m0.514s 00:06:29.875 sys 0m0.366s 00:06:29.875 15:29:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.875 15:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.875 ************************************ 00:06:29.875 END TEST dd_flag_append 00:06:29.875 ************************************ 00:06:29.875 15:29:31 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:29.875 15:29:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.875 15:29:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.875 15:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:30.133 ************************************ 00:06:30.133 START TEST dd_flag_directory 00:06:30.133 ************************************ 00:06:30.133 15:29:31 -- common/autotest_common.sh@1111 -- # directory 00:06:30.133 15:29:31 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.133 15:29:31 -- 
common/autotest_common.sh@638 -- # local es=0 00:06:30.134 15:29:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.134 15:29:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.134 15:29:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.134 15:29:31 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.134 15:29:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.134 15:29:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.134 15:29:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.134 15:29:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.134 15:29:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.134 15:29:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.134 [2024-04-17 15:29:31.382625] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:30.134 [2024-04-17 15:29:31.383088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62848 ] 00:06:30.134 [2024-04-17 15:29:31.512979] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.393 [2024-04-17 15:29:31.627776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.393 [2024-04-17 15:29:31.746828] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:30.393 [2024-04-17 15:29:31.746913] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:30.393 [2024-04-17 15:29:31.746932] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.652 [2024-04-17 15:29:31.909599] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:30.652 15:29:32 -- common/autotest_common.sh@641 -- # es=236 00:06:30.652 15:29:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:30.652 15:29:32 -- common/autotest_common.sh@650 -- # es=108 00:06:30.652 15:29:32 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:30.652 15:29:32 -- common/autotest_common.sh@658 -- # es=1 00:06:30.652 15:29:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:30.652 15:29:32 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:30.652 15:29:32 -- common/autotest_common.sh@638 -- # local es=0 00:06:30.652 15:29:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:30.652 15:29:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
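The exchange above is the first half of the directory-flag test: the NOT helper runs spdk_dd with --iflag=directory against a regular file, spdk_dd refuses it with "Not a directory", and the wrapper treats that non-zero exit as the expected result. The second half, which follows, repeats the check on the output side with --oflag=directory. A compact sketch of the pattern under the same path assumptions, using plain ! in place of the suite's NOT wrapper:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
# dd.dump0 is a regular file, so asking for directory semantics must fail on either side
! "$DD" --if="$TESTDIR/dd.dump0" --iflag=directory --of="$TESTDIR/dd.dump0"
! "$DD" --if="$TESTDIR/dd.dump0" --of="$TESTDIR/dd.dump0" --oflag=directory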
00:06:30.652 15:29:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.652 15:29:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.652 15:29:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.652 15:29:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.652 15:29:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.652 15:29:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.652 15:29:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.652 15:29:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:30.911 [2024-04-17 15:29:32.144065] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:30.911 [2024-04-17 15:29:32.144171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62863 ] 00:06:30.911 [2024-04-17 15:29:32.285270] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.170 [2024-04-17 15:29:32.428546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.170 [2024-04-17 15:29:32.547502] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:31.170 [2024-04-17 15:29:32.547580] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:31.170 [2024-04-17 15:29:32.547600] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.428 [2024-04-17 15:29:32.712459] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:31.687 15:29:32 -- common/autotest_common.sh@641 -- # es=236 00:06:31.687 15:29:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:31.687 15:29:32 -- common/autotest_common.sh@650 -- # es=108 00:06:31.687 ************************************ 00:06:31.687 END TEST dd_flag_directory 00:06:31.687 ************************************ 00:06:31.687 15:29:32 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:31.687 15:29:32 -- common/autotest_common.sh@658 -- # es=1 00:06:31.687 15:29:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:31.687 00:06:31.687 real 0m1.554s 00:06:31.687 user 0m0.961s 00:06:31.687 sys 0m0.379s 00:06:31.687 15:29:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.687 15:29:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.687 15:29:32 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:31.687 15:29:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.687 15:29:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.687 15:29:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.687 ************************************ 00:06:31.687 START TEST dd_flag_nofollow 00:06:31.687 ************************************ 00:06:31.687 15:29:32 -- common/autotest_common.sh@1111 -- # nofollow 00:06:31.687 15:29:32 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:31.687 15:29:33 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:31.687 15:29:33 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:31.687 15:29:33 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:31.687 15:29:33 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.687 15:29:33 -- common/autotest_common.sh@638 -- # local es=0 00:06:31.687 15:29:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.687 15:29:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.687 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.687 15:29:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.688 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.688 15:29:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.688 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.688 15:29:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.688 15:29:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.688 15:29:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.688 [2024-04-17 15:29:33.070094] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:31.688 [2024-04-17 15:29:33.070235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62895 ] 00:06:31.946 [2024-04-17 15:29:33.208650] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.946 [2024-04-17 15:29:33.351822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.205 [2024-04-17 15:29:33.476399] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:32.206 [2024-04-17 15:29:33.476474] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:32.206 [2024-04-17 15:29:33.476494] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.206 [2024-04-17 15:29:33.645442] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:32.465 15:29:33 -- common/autotest_common.sh@641 -- # es=216 00:06:32.465 15:29:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:32.465 15:29:33 -- common/autotest_common.sh@650 -- # es=88 00:06:32.465 15:29:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:32.465 15:29:33 -- common/autotest_common.sh@658 -- # es=1 00:06:32.465 15:29:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:32.465 15:29:33 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:32.465 15:29:33 -- common/autotest_common.sh@638 -- # local es=0 00:06:32.465 15:29:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:32.465 15:29:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.465 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.465 15:29:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.465 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.465 15:29:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.465 15:29:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.465 15:29:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.465 15:29:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.465 15:29:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:32.465 [2024-04-17 15:29:33.890458] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:32.465 [2024-04-17 15:29:33.890602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:06:32.724 [2024-04-17 15:29:34.036053] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.988 [2024-04-17 15:29:34.180213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.988 [2024-04-17 15:29:34.300388] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:32.989 [2024-04-17 15:29:34.300471] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:32.989 [2024-04-17 15:29:34.300492] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.252 [2024-04-17 15:29:34.464960] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:33.252 15:29:34 -- common/autotest_common.sh@641 -- # es=216 00:06:33.252 15:29:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.252 15:29:34 -- common/autotest_common.sh@650 -- # es=88 00:06:33.252 15:29:34 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:33.252 15:29:34 -- common/autotest_common.sh@658 -- # es=1 00:06:33.252 15:29:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.252 15:29:34 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:33.252 15:29:34 -- dd/common.sh@98 -- # xtrace_disable 00:06:33.252 15:29:34 -- common/autotest_common.sh@10 -- # set +x 00:06:33.252 15:29:34 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.510 [2024-04-17 15:29:34.709481] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
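dd_flag_nofollow applies the same negative pattern to symlinks: dd.dump0.link and dd.dump1.link are created with ln -fs, spdk_dd is expected to fail with "Too many levels of symbolic links" when --iflag=nofollow or --oflag=nofollow points at a link, and the final run starting above copies 512 bytes through the input link without the flag to confirm the links themselves resolve normally. A short sketch of that sequence under the same path assumptions, again using ! for the expected-failure runs:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
ln -fs "$TESTDIR/dd.dump0" "$TESTDIR/dd.dump0.link"
ln -fs "$TESTDIR/dd.dump1" "$TESTDIR/dd.dump1.link"
# nofollow on either side of a symlink must be rejected
! "$DD" --if="$TESTDIR/dd.dump0.link" --iflag=nofollow --of="$TESTDIR/dd.dump1"
! "$DD" --if="$TESTDIR/dd.dump0" --of="$TESTDIR/dd.dump1.link" --oflag=nofollow
# without the flag the same link is followed and the copy succeeds
"$DD" --if="$TESTDIR/dd.dump0.link" --of="$TESTDIR/dd.dump1"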
00:06:33.511 [2024-04-17 15:29:34.709597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62918 ] 00:06:33.511 [2024-04-17 15:29:34.846536] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.769 [2024-04-17 15:29:34.992060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.029  Copying: 512/512 [B] (average 500 kBps) 00:06:34.029 00:06:34.029 ************************************ 00:06:34.029 END TEST dd_flag_nofollow 00:06:34.029 ************************************ 00:06:34.029 15:29:35 -- dd/posix.sh@49 -- # [[ g9goxkybb4awfzaorhbw30xafgheuohpw4xncx4w0td9hm3xiy3n3f9p24ce9s9t0x73j03qdj0kl5fbxw5f3fquec35262q2507oqaobvhzs8lyit67na4qp87hlbtcr34byzha4bwsbisu46javg7ng8mdvkb0ro985t2q7h8mjyjsbtdykgy3wvereq0i6jqqxut8v8nj779l8exb7n09rg08ij0xouchbmm0iyybbxcq0r3mvn6rfw0kjjqbkvgpsfxrh7uzpjrt9xvl564qqmby0hrwvck12v7ddd0nsp905zcw0qbse5p9a44hs5qo32n7nczsthqr0wsxyfkzwna6ksb9dwrhz3b1e2twrrztt0tbng7w1aixwl48030dwxyi14yrmgwxh5rhl181x0h28jz58ncsb8cmkjibjdb5291rhv0hrug7czzn33upw9v3thj85j9nkl4ma7vr2kywynsmds09zbq0wtbf5v2m9zhvx5my5cace2lw == \g\9\g\o\x\k\y\b\b\4\a\w\f\z\a\o\r\h\b\w\3\0\x\a\f\g\h\e\u\o\h\p\w\4\x\n\c\x\4\w\0\t\d\9\h\m\3\x\i\y\3\n\3\f\9\p\2\4\c\e\9\s\9\t\0\x\7\3\j\0\3\q\d\j\0\k\l\5\f\b\x\w\5\f\3\f\q\u\e\c\3\5\2\6\2\q\2\5\0\7\o\q\a\o\b\v\h\z\s\8\l\y\i\t\6\7\n\a\4\q\p\8\7\h\l\b\t\c\r\3\4\b\y\z\h\a\4\b\w\s\b\i\s\u\4\6\j\a\v\g\7\n\g\8\m\d\v\k\b\0\r\o\9\8\5\t\2\q\7\h\8\m\j\y\j\s\b\t\d\y\k\g\y\3\w\v\e\r\e\q\0\i\6\j\q\q\x\u\t\8\v\8\n\j\7\7\9\l\8\e\x\b\7\n\0\9\r\g\0\8\i\j\0\x\o\u\c\h\b\m\m\0\i\y\y\b\b\x\c\q\0\r\3\m\v\n\6\r\f\w\0\k\j\j\q\b\k\v\g\p\s\f\x\r\h\7\u\z\p\j\r\t\9\x\v\l\5\6\4\q\q\m\b\y\0\h\r\w\v\c\k\1\2\v\7\d\d\d\0\n\s\p\9\0\5\z\c\w\0\q\b\s\e\5\p\9\a\4\4\h\s\5\q\o\3\2\n\7\n\c\z\s\t\h\q\r\0\w\s\x\y\f\k\z\w\n\a\6\k\s\b\9\d\w\r\h\z\3\b\1\e\2\t\w\r\r\z\t\t\0\t\b\n\g\7\w\1\a\i\x\w\l\4\8\0\3\0\d\w\x\y\i\1\4\y\r\m\g\w\x\h\5\r\h\l\1\8\1\x\0\h\2\8\j\z\5\8\n\c\s\b\8\c\m\k\j\i\b\j\d\b\5\2\9\1\r\h\v\0\h\r\u\g\7\c\z\z\n\3\3\u\p\w\9\v\3\t\h\j\8\5\j\9\n\k\l\4\m\a\7\v\r\2\k\y\w\y\n\s\m\d\s\0\9\z\b\q\0\w\t\b\f\5\v\2\m\9\z\h\v\x\5\m\y\5\c\a\c\e\2\l\w ]] 00:06:34.029 00:06:34.029 real 0m2.443s 00:06:34.029 user 0m1.520s 00:06:34.029 sys 0m0.770s 00:06:34.029 15:29:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.029 15:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 15:29:35 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:34.287 15:29:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.287 15:29:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.287 15:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 START TEST dd_flag_noatime 00:06:34.287 ************************************ 00:06:34.287 15:29:35 -- common/autotest_common.sh@1111 -- # noatime 00:06:34.287 15:29:35 -- dd/posix.sh@53 -- # local atime_if 00:06:34.287 15:29:35 -- dd/posix.sh@54 -- # local atime_of 00:06:34.287 15:29:35 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:34.287 15:29:35 -- dd/common.sh@98 -- # xtrace_disable 00:06:34.287 15:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 15:29:35 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.287 15:29:35 -- dd/posix.sh@60 -- # atime_if=1713367775 00:06:34.287 15:29:35 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.287 15:29:35 -- dd/posix.sh@61 -- # atime_of=1713367775 00:06:34.287 15:29:35 -- dd/posix.sh@66 -- # sleep 1 00:06:35.223 15:29:36 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.223 [2024-04-17 15:29:36.638743] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:35.223 [2024-04-17 15:29:36.638868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:06:35.481 [2024-04-17 15:29:36.784842] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.740 [2024-04-17 15:29:36.945213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.999  Copying: 512/512 [B] (average 500 kBps) 00:06:35.999 00:06:35.999 15:29:37 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.999 15:29:37 -- dd/posix.sh@69 -- # (( atime_if == 1713367775 )) 00:06:35.999 15:29:37 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.999 15:29:37 -- dd/posix.sh@70 -- # (( atime_of == 1713367775 )) 00:06:35.999 15:29:37 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.999 [2024-04-17 15:29:37.436674] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:35.999 [2024-04-17 15:29:37.436786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62994 ] 00:06:36.257 [2024-04-17 15:29:37.569465] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.516 [2024-04-17 15:29:37.715290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.776  Copying: 512/512 [B] (average 500 kBps) 00:06:36.776 00:06:36.776 15:29:38 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.776 ************************************ 00:06:36.776 END TEST dd_flag_noatime 00:06:36.776 ************************************ 00:06:36.776 15:29:38 -- dd/posix.sh@73 -- # (( atime_if < 1713367777 )) 00:06:36.776 00:06:36.776 real 0m2.611s 00:06:36.776 user 0m1.007s 00:06:36.776 sys 0m0.724s 00:06:36.776 15:29:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.776 15:29:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.776 15:29:38 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:36.776 15:29:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.776 15:29:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.776 15:29:38 -- common/autotest_common.sh@10 -- # set +x 00:06:37.054 ************************************ 00:06:37.054 START TEST dd_flags_misc 00:06:37.054 ************************************ 00:06:37.054 15:29:38 -- common/autotest_common.sh@1111 -- # io 00:06:37.054 15:29:38 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:37.054 15:29:38 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:37.054 
15:29:38 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:37.054 15:29:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:37.054 15:29:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:37.054 15:29:38 -- dd/common.sh@98 -- # xtrace_disable 00:06:37.054 15:29:38 -- common/autotest_common.sh@10 -- # set +x 00:06:37.054 15:29:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.054 15:29:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:37.054 [2024-04-17 15:29:38.347444] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:37.054 [2024-04-17 15:29:38.347556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63032 ] 00:06:37.331 [2024-04-17 15:29:38.488036] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.331 [2024-04-17 15:29:38.633099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.848  Copying: 512/512 [B] (average 500 kBps) 00:06:37.848 00:06:37.848 15:29:39 -- dd/posix.sh@93 -- # [[ hr4v0vl6mxhlznhfj6jxzi44bg7ktq52aivoroqwyldq2svtyi41j8326zw0b7856xwe9amee1fovla6g68urkl0xypx7b2q3h5ja00m92kaiac3q0z6mkbj75fojr1bih0mkai2cv5349kgd802x4ghb4orujp5ptb41guclgf7tbzdwwhm66j2lklv7ks41nal9opmgytfc6hbb5um443v7tgdsybo4g7uwnkokezew2rdnvbefx1ggotvj426mlnym47rcykxp2qovwdf36ohbrxau1ob9u78lyr0zs6a4f4s8k325ldger97smyz25rmrrkejbymhi4p65cvioe2m50mbg57odfclwqojqkh1u9m0wdbubgfsy5fukqru6kjw9uk5cv5t8gfwkybs4khk6qsvu99vbaxs7lkhh2iwbxcxze2ldjf7g56oz8rpfskhyikepkj7o39nwhqdjyo35n4gjr8pfku86dzwto66z03lu32ef9l3phtb6fw == \h\r\4\v\0\v\l\6\m\x\h\l\z\n\h\f\j\6\j\x\z\i\4\4\b\g\7\k\t\q\5\2\a\i\v\o\r\o\q\w\y\l\d\q\2\s\v\t\y\i\4\1\j\8\3\2\6\z\w\0\b\7\8\5\6\x\w\e\9\a\m\e\e\1\f\o\v\l\a\6\g\6\8\u\r\k\l\0\x\y\p\x\7\b\2\q\3\h\5\j\a\0\0\m\9\2\k\a\i\a\c\3\q\0\z\6\m\k\b\j\7\5\f\o\j\r\1\b\i\h\0\m\k\a\i\2\c\v\5\3\4\9\k\g\d\8\0\2\x\4\g\h\b\4\o\r\u\j\p\5\p\t\b\4\1\g\u\c\l\g\f\7\t\b\z\d\w\w\h\m\6\6\j\2\l\k\l\v\7\k\s\4\1\n\a\l\9\o\p\m\g\y\t\f\c\6\h\b\b\5\u\m\4\4\3\v\7\t\g\d\s\y\b\o\4\g\7\u\w\n\k\o\k\e\z\e\w\2\r\d\n\v\b\e\f\x\1\g\g\o\t\v\j\4\2\6\m\l\n\y\m\4\7\r\c\y\k\x\p\2\q\o\v\w\d\f\3\6\o\h\b\r\x\a\u\1\o\b\9\u\7\8\l\y\r\0\z\s\6\a\4\f\4\s\8\k\3\2\5\l\d\g\e\r\9\7\s\m\y\z\2\5\r\m\r\r\k\e\j\b\y\m\h\i\4\p\6\5\c\v\i\o\e\2\m\5\0\m\b\g\5\7\o\d\f\c\l\w\q\o\j\q\k\h\1\u\9\m\0\w\d\b\u\b\g\f\s\y\5\f\u\k\q\r\u\6\k\j\w\9\u\k\5\c\v\5\t\8\g\f\w\k\y\b\s\4\k\h\k\6\q\s\v\u\9\9\v\b\a\x\s\7\l\k\h\h\2\i\w\b\x\c\x\z\e\2\l\d\j\f\7\g\5\6\o\z\8\r\p\f\s\k\h\y\i\k\e\p\k\j\7\o\3\9\n\w\h\q\d\j\y\o\3\5\n\4\g\j\r\8\p\f\k\u\8\6\d\z\w\t\o\6\6\z\0\3\l\u\3\2\e\f\9\l\3\p\h\t\b\6\f\w ]] 00:06:37.848 15:29:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.848 15:29:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:37.848 [2024-04-17 15:29:39.159932] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
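The trace above shows dd/posix.sh building its flag matrix: flags_ro holds the read-side open flags (direct, nonblock) and flags_rw extends them with sync and dsync for the write side; each spdk_dd run that follows pairs one --iflag with one --oflag against the same 512-byte dump files. A minimal sketch of that loop, assuming the dd.dump0/dd.dump1 paths from the trace and substituting md5sum for the test's own gen_bytes/xtrace plumbing:

    #!/usr/bin/env bash
    # Sketch only: run every read-flag/write-flag combination and verify the copy is intact.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        "$DD" --if="$SRC" --iflag="$flag_ro" --of="$DST" --oflag="$flag_rw"
        # The real test compares the payload with a [[ ... == ... ]] pattern match;
        # md5sum is an equivalent, simpler integrity check for this sketch.
        [[ "$(md5sum < "$SRC")" == "$(md5sum < "$DST")" ]] || exit 1
      done
    done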
00:06:37.848 [2024-04-17 15:29:39.160067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:06:38.106 [2024-04-17 15:29:39.298442] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.106 [2024-04-17 15:29:39.441144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.623  Copying: 512/512 [B] (average 500 kBps) 00:06:38.623 00:06:38.623 15:29:39 -- dd/posix.sh@93 -- # [[ hr4v0vl6mxhlznhfj6jxzi44bg7ktq52aivoroqwyldq2svtyi41j8326zw0b7856xwe9amee1fovla6g68urkl0xypx7b2q3h5ja00m92kaiac3q0z6mkbj75fojr1bih0mkai2cv5349kgd802x4ghb4orujp5ptb41guclgf7tbzdwwhm66j2lklv7ks41nal9opmgytfc6hbb5um443v7tgdsybo4g7uwnkokezew2rdnvbefx1ggotvj426mlnym47rcykxp2qovwdf36ohbrxau1ob9u78lyr0zs6a4f4s8k325ldger97smyz25rmrrkejbymhi4p65cvioe2m50mbg57odfclwqojqkh1u9m0wdbubgfsy5fukqru6kjw9uk5cv5t8gfwkybs4khk6qsvu99vbaxs7lkhh2iwbxcxze2ldjf7g56oz8rpfskhyikepkj7o39nwhqdjyo35n4gjr8pfku86dzwto66z03lu32ef9l3phtb6fw == \h\r\4\v\0\v\l\6\m\x\h\l\z\n\h\f\j\6\j\x\z\i\4\4\b\g\7\k\t\q\5\2\a\i\v\o\r\o\q\w\y\l\d\q\2\s\v\t\y\i\4\1\j\8\3\2\6\z\w\0\b\7\8\5\6\x\w\e\9\a\m\e\e\1\f\o\v\l\a\6\g\6\8\u\r\k\l\0\x\y\p\x\7\b\2\q\3\h\5\j\a\0\0\m\9\2\k\a\i\a\c\3\q\0\z\6\m\k\b\j\7\5\f\o\j\r\1\b\i\h\0\m\k\a\i\2\c\v\5\3\4\9\k\g\d\8\0\2\x\4\g\h\b\4\o\r\u\j\p\5\p\t\b\4\1\g\u\c\l\g\f\7\t\b\z\d\w\w\h\m\6\6\j\2\l\k\l\v\7\k\s\4\1\n\a\l\9\o\p\m\g\y\t\f\c\6\h\b\b\5\u\m\4\4\3\v\7\t\g\d\s\y\b\o\4\g\7\u\w\n\k\o\k\e\z\e\w\2\r\d\n\v\b\e\f\x\1\g\g\o\t\v\j\4\2\6\m\l\n\y\m\4\7\r\c\y\k\x\p\2\q\o\v\w\d\f\3\6\o\h\b\r\x\a\u\1\o\b\9\u\7\8\l\y\r\0\z\s\6\a\4\f\4\s\8\k\3\2\5\l\d\g\e\r\9\7\s\m\y\z\2\5\r\m\r\r\k\e\j\b\y\m\h\i\4\p\6\5\c\v\i\o\e\2\m\5\0\m\b\g\5\7\o\d\f\c\l\w\q\o\j\q\k\h\1\u\9\m\0\w\d\b\u\b\g\f\s\y\5\f\u\k\q\r\u\6\k\j\w\9\u\k\5\c\v\5\t\8\g\f\w\k\y\b\s\4\k\h\k\6\q\s\v\u\9\9\v\b\a\x\s\7\l\k\h\h\2\i\w\b\x\c\x\z\e\2\l\d\j\f\7\g\5\6\o\z\8\r\p\f\s\k\h\y\i\k\e\p\k\j\7\o\3\9\n\w\h\q\d\j\y\o\3\5\n\4\g\j\r\8\p\f\k\u\8\6\d\z\w\t\o\6\6\z\0\3\l\u\3\2\e\f\9\l\3\p\h\t\b\6\f\w ]] 00:06:38.623 15:29:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.623 15:29:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:38.623 [2024-04-17 15:29:39.935063] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:38.623 [2024-04-17 15:29:39.935172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63051 ] 00:06:38.882 [2024-04-17 15:29:40.066977] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.882 [2024-04-17 15:29:40.217638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.399  Copying: 512/512 [B] (average 250 kBps) 00:06:39.399 00:06:39.399 15:29:40 -- dd/posix.sh@93 -- # [[ hr4v0vl6mxhlznhfj6jxzi44bg7ktq52aivoroqwyldq2svtyi41j8326zw0b7856xwe9amee1fovla6g68urkl0xypx7b2q3h5ja00m92kaiac3q0z6mkbj75fojr1bih0mkai2cv5349kgd802x4ghb4orujp5ptb41guclgf7tbzdwwhm66j2lklv7ks41nal9opmgytfc6hbb5um443v7tgdsybo4g7uwnkokezew2rdnvbefx1ggotvj426mlnym47rcykxp2qovwdf36ohbrxau1ob9u78lyr0zs6a4f4s8k325ldger97smyz25rmrrkejbymhi4p65cvioe2m50mbg57odfclwqojqkh1u9m0wdbubgfsy5fukqru6kjw9uk5cv5t8gfwkybs4khk6qsvu99vbaxs7lkhh2iwbxcxze2ldjf7g56oz8rpfskhyikepkj7o39nwhqdjyo35n4gjr8pfku86dzwto66z03lu32ef9l3phtb6fw == \h\r\4\v\0\v\l\6\m\x\h\l\z\n\h\f\j\6\j\x\z\i\4\4\b\g\7\k\t\q\5\2\a\i\v\o\r\o\q\w\y\l\d\q\2\s\v\t\y\i\4\1\j\8\3\2\6\z\w\0\b\7\8\5\6\x\w\e\9\a\m\e\e\1\f\o\v\l\a\6\g\6\8\u\r\k\l\0\x\y\p\x\7\b\2\q\3\h\5\j\a\0\0\m\9\2\k\a\i\a\c\3\q\0\z\6\m\k\b\j\7\5\f\o\j\r\1\b\i\h\0\m\k\a\i\2\c\v\5\3\4\9\k\g\d\8\0\2\x\4\g\h\b\4\o\r\u\j\p\5\p\t\b\4\1\g\u\c\l\g\f\7\t\b\z\d\w\w\h\m\6\6\j\2\l\k\l\v\7\k\s\4\1\n\a\l\9\o\p\m\g\y\t\f\c\6\h\b\b\5\u\m\4\4\3\v\7\t\g\d\s\y\b\o\4\g\7\u\w\n\k\o\k\e\z\e\w\2\r\d\n\v\b\e\f\x\1\g\g\o\t\v\j\4\2\6\m\l\n\y\m\4\7\r\c\y\k\x\p\2\q\o\v\w\d\f\3\6\o\h\b\r\x\a\u\1\o\b\9\u\7\8\l\y\r\0\z\s\6\a\4\f\4\s\8\k\3\2\5\l\d\g\e\r\9\7\s\m\y\z\2\5\r\m\r\r\k\e\j\b\y\m\h\i\4\p\6\5\c\v\i\o\e\2\m\5\0\m\b\g\5\7\o\d\f\c\l\w\q\o\j\q\k\h\1\u\9\m\0\w\d\b\u\b\g\f\s\y\5\f\u\k\q\r\u\6\k\j\w\9\u\k\5\c\v\5\t\8\g\f\w\k\y\b\s\4\k\h\k\6\q\s\v\u\9\9\v\b\a\x\s\7\l\k\h\h\2\i\w\b\x\c\x\z\e\2\l\d\j\f\7\g\5\6\o\z\8\r\p\f\s\k\h\y\i\k\e\p\k\j\7\o\3\9\n\w\h\q\d\j\y\o\3\5\n\4\g\j\r\8\p\f\k\u\8\6\d\z\w\t\o\6\6\z\0\3\l\u\3\2\e\f\9\l\3\p\h\t\b\6\f\w ]] 00:06:39.399 15:29:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.399 15:29:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:39.399 [2024-04-17 15:29:40.707583] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:39.399 [2024-04-17 15:29:40.707705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63066 ] 00:06:39.658 [2024-04-17 15:29:40.847414] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.658 [2024-04-17 15:29:40.981075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.916  Copying: 512/512 [B] (average 166 kBps) 00:06:39.916 00:06:39.916 15:29:41 -- dd/posix.sh@93 -- # [[ hr4v0vl6mxhlznhfj6jxzi44bg7ktq52aivoroqwyldq2svtyi41j8326zw0b7856xwe9amee1fovla6g68urkl0xypx7b2q3h5ja00m92kaiac3q0z6mkbj75fojr1bih0mkai2cv5349kgd802x4ghb4orujp5ptb41guclgf7tbzdwwhm66j2lklv7ks41nal9opmgytfc6hbb5um443v7tgdsybo4g7uwnkokezew2rdnvbefx1ggotvj426mlnym47rcykxp2qovwdf36ohbrxau1ob9u78lyr0zs6a4f4s8k325ldger97smyz25rmrrkejbymhi4p65cvioe2m50mbg57odfclwqojqkh1u9m0wdbubgfsy5fukqru6kjw9uk5cv5t8gfwkybs4khk6qsvu99vbaxs7lkhh2iwbxcxze2ldjf7g56oz8rpfskhyikepkj7o39nwhqdjyo35n4gjr8pfku86dzwto66z03lu32ef9l3phtb6fw == \h\r\4\v\0\v\l\6\m\x\h\l\z\n\h\f\j\6\j\x\z\i\4\4\b\g\7\k\t\q\5\2\a\i\v\o\r\o\q\w\y\l\d\q\2\s\v\t\y\i\4\1\j\8\3\2\6\z\w\0\b\7\8\5\6\x\w\e\9\a\m\e\e\1\f\o\v\l\a\6\g\6\8\u\r\k\l\0\x\y\p\x\7\b\2\q\3\h\5\j\a\0\0\m\9\2\k\a\i\a\c\3\q\0\z\6\m\k\b\j\7\5\f\o\j\r\1\b\i\h\0\m\k\a\i\2\c\v\5\3\4\9\k\g\d\8\0\2\x\4\g\h\b\4\o\r\u\j\p\5\p\t\b\4\1\g\u\c\l\g\f\7\t\b\z\d\w\w\h\m\6\6\j\2\l\k\l\v\7\k\s\4\1\n\a\l\9\o\p\m\g\y\t\f\c\6\h\b\b\5\u\m\4\4\3\v\7\t\g\d\s\y\b\o\4\g\7\u\w\n\k\o\k\e\z\e\w\2\r\d\n\v\b\e\f\x\1\g\g\o\t\v\j\4\2\6\m\l\n\y\m\4\7\r\c\y\k\x\p\2\q\o\v\w\d\f\3\6\o\h\b\r\x\a\u\1\o\b\9\u\7\8\l\y\r\0\z\s\6\a\4\f\4\s\8\k\3\2\5\l\d\g\e\r\9\7\s\m\y\z\2\5\r\m\r\r\k\e\j\b\y\m\h\i\4\p\6\5\c\v\i\o\e\2\m\5\0\m\b\g\5\7\o\d\f\c\l\w\q\o\j\q\k\h\1\u\9\m\0\w\d\b\u\b\g\f\s\y\5\f\u\k\q\r\u\6\k\j\w\9\u\k\5\c\v\5\t\8\g\f\w\k\y\b\s\4\k\h\k\6\q\s\v\u\9\9\v\b\a\x\s\7\l\k\h\h\2\i\w\b\x\c\x\z\e\2\l\d\j\f\7\g\5\6\o\z\8\r\p\f\s\k\h\y\i\k\e\p\k\j\7\o\3\9\n\w\h\q\d\j\y\o\3\5\n\4\g\j\r\8\p\f\k\u\8\6\d\z\w\t\o\6\6\z\0\3\l\u\3\2\e\f\9\l\3\p\h\t\b\6\f\w ]] 00:06:39.916 15:29:41 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:39.916 15:29:41 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:39.916 15:29:41 -- dd/common.sh@98 -- # xtrace_disable 00:06:39.916 15:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.916 15:29:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.916 15:29:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:40.175 [2024-04-17 15:29:41.383090] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:40.175 [2024-04-17 15:29:41.383214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:06:40.175 [2024-04-17 15:29:41.521806] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.433 [2024-04-17 15:29:41.638495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.691  Copying: 512/512 [B] (average 500 kBps) 00:06:40.691 00:06:40.691 15:29:41 -- dd/posix.sh@93 -- # [[ h5ei9nm0iie9fymd9be09fc237v876cnuthhqatw2610ygybec45loenl0zb4rxofj4y38863nn5s6xev0e06dog10eu3qvtnwb095q6oent0xmehfcbanwr78xjtrn8ucj5pgvfycscyhxoid87q9od7yqfkcr83a3lj4j1dcyf412zilljcz3vsf4dse9eqo5na1ayaw08amxpqdysaafasag2ntxxywa42ftme43f6dc5kvsyj4jquvgbu38mamddp89alz75l76m3nymfxsor35jshn7rksoiy977sjrwokon1ggf60ano31q481fw0mno9t12r87e57k9ibdtqln45j2fl608v0lrpun6kbw2vkniiy84kj9qivysbj0o3mwtzd5ve2gfepzpq1s5uio2r3pw4fbvfp1d8tv0k0gijexz3r3p9dv0pvbh0f5vssseep2dwhg2v42habpg098xcv6l1ar71q6wx666zpa5tl5t67lm0iybli0xm2 == \h\5\e\i\9\n\m\0\i\i\e\9\f\y\m\d\9\b\e\0\9\f\c\2\3\7\v\8\7\6\c\n\u\t\h\h\q\a\t\w\2\6\1\0\y\g\y\b\e\c\4\5\l\o\e\n\l\0\z\b\4\r\x\o\f\j\4\y\3\8\8\6\3\n\n\5\s\6\x\e\v\0\e\0\6\d\o\g\1\0\e\u\3\q\v\t\n\w\b\0\9\5\q\6\o\e\n\t\0\x\m\e\h\f\c\b\a\n\w\r\7\8\x\j\t\r\n\8\u\c\j\5\p\g\v\f\y\c\s\c\y\h\x\o\i\d\8\7\q\9\o\d\7\y\q\f\k\c\r\8\3\a\3\l\j\4\j\1\d\c\y\f\4\1\2\z\i\l\l\j\c\z\3\v\s\f\4\d\s\e\9\e\q\o\5\n\a\1\a\y\a\w\0\8\a\m\x\p\q\d\y\s\a\a\f\a\s\a\g\2\n\t\x\x\y\w\a\4\2\f\t\m\e\4\3\f\6\d\c\5\k\v\s\y\j\4\j\q\u\v\g\b\u\3\8\m\a\m\d\d\p\8\9\a\l\z\7\5\l\7\6\m\3\n\y\m\f\x\s\o\r\3\5\j\s\h\n\7\r\k\s\o\i\y\9\7\7\s\j\r\w\o\k\o\n\1\g\g\f\6\0\a\n\o\3\1\q\4\8\1\f\w\0\m\n\o\9\t\1\2\r\8\7\e\5\7\k\9\i\b\d\t\q\l\n\4\5\j\2\f\l\6\0\8\v\0\l\r\p\u\n\6\k\b\w\2\v\k\n\i\i\y\8\4\k\j\9\q\i\v\y\s\b\j\0\o\3\m\w\t\z\d\5\v\e\2\g\f\e\p\z\p\q\1\s\5\u\i\o\2\r\3\p\w\4\f\b\v\f\p\1\d\8\t\v\0\k\0\g\i\j\e\x\z\3\r\3\p\9\d\v\0\p\v\b\h\0\f\5\v\s\s\s\e\e\p\2\d\w\h\g\2\v\4\2\h\a\b\p\g\0\9\8\x\c\v\6\l\1\a\r\7\1\q\6\w\x\6\6\6\z\p\a\5\t\l\5\t\6\7\l\m\0\i\y\b\l\i\0\x\m\2 ]] 00:06:40.691 15:29:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.691 15:29:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:40.691 [2024-04-17 15:29:42.027870] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:40.691 [2024-04-17 15:29:42.028007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:06:40.949 [2024-04-17 15:29:42.167316] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.949 [2024-04-17 15:29:42.299734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.208  Copying: 512/512 [B] (average 500 kBps) 00:06:41.208 00:06:41.208 15:29:42 -- dd/posix.sh@93 -- # [[ h5ei9nm0iie9fymd9be09fc237v876cnuthhqatw2610ygybec45loenl0zb4rxofj4y38863nn5s6xev0e06dog10eu3qvtnwb095q6oent0xmehfcbanwr78xjtrn8ucj5pgvfycscyhxoid87q9od7yqfkcr83a3lj4j1dcyf412zilljcz3vsf4dse9eqo5na1ayaw08amxpqdysaafasag2ntxxywa42ftme43f6dc5kvsyj4jquvgbu38mamddp89alz75l76m3nymfxsor35jshn7rksoiy977sjrwokon1ggf60ano31q481fw0mno9t12r87e57k9ibdtqln45j2fl608v0lrpun6kbw2vkniiy84kj9qivysbj0o3mwtzd5ve2gfepzpq1s5uio2r3pw4fbvfp1d8tv0k0gijexz3r3p9dv0pvbh0f5vssseep2dwhg2v42habpg098xcv6l1ar71q6wx666zpa5tl5t67lm0iybli0xm2 == \h\5\e\i\9\n\m\0\i\i\e\9\f\y\m\d\9\b\e\0\9\f\c\2\3\7\v\8\7\6\c\n\u\t\h\h\q\a\t\w\2\6\1\0\y\g\y\b\e\c\4\5\l\o\e\n\l\0\z\b\4\r\x\o\f\j\4\y\3\8\8\6\3\n\n\5\s\6\x\e\v\0\e\0\6\d\o\g\1\0\e\u\3\q\v\t\n\w\b\0\9\5\q\6\o\e\n\t\0\x\m\e\h\f\c\b\a\n\w\r\7\8\x\j\t\r\n\8\u\c\j\5\p\g\v\f\y\c\s\c\y\h\x\o\i\d\8\7\q\9\o\d\7\y\q\f\k\c\r\8\3\a\3\l\j\4\j\1\d\c\y\f\4\1\2\z\i\l\l\j\c\z\3\v\s\f\4\d\s\e\9\e\q\o\5\n\a\1\a\y\a\w\0\8\a\m\x\p\q\d\y\s\a\a\f\a\s\a\g\2\n\t\x\x\y\w\a\4\2\f\t\m\e\4\3\f\6\d\c\5\k\v\s\y\j\4\j\q\u\v\g\b\u\3\8\m\a\m\d\d\p\8\9\a\l\z\7\5\l\7\6\m\3\n\y\m\f\x\s\o\r\3\5\j\s\h\n\7\r\k\s\o\i\y\9\7\7\s\j\r\w\o\k\o\n\1\g\g\f\6\0\a\n\o\3\1\q\4\8\1\f\w\0\m\n\o\9\t\1\2\r\8\7\e\5\7\k\9\i\b\d\t\q\l\n\4\5\j\2\f\l\6\0\8\v\0\l\r\p\u\n\6\k\b\w\2\v\k\n\i\i\y\8\4\k\j\9\q\i\v\y\s\b\j\0\o\3\m\w\t\z\d\5\v\e\2\g\f\e\p\z\p\q\1\s\5\u\i\o\2\r\3\p\w\4\f\b\v\f\p\1\d\8\t\v\0\k\0\g\i\j\e\x\z\3\r\3\p\9\d\v\0\p\v\b\h\0\f\5\v\s\s\s\e\e\p\2\d\w\h\g\2\v\4\2\h\a\b\p\g\0\9\8\x\c\v\6\l\1\a\r\7\1\q\6\w\x\6\6\6\z\p\a\5\t\l\5\t\6\7\l\m\0\i\y\b\l\i\0\x\m\2 ]] 00:06:41.208 15:29:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.208 15:29:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:41.466 [2024-04-17 15:29:42.711850] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:41.466 [2024-04-17 15:29:42.712017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63100 ] 00:06:41.466 [2024-04-17 15:29:42.854725] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.725 [2024-04-17 15:29:42.966401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.985  Copying: 512/512 [B] (average 250 kBps) 00:06:41.985 00:06:41.985 15:29:43 -- dd/posix.sh@93 -- # [[ h5ei9nm0iie9fymd9be09fc237v876cnuthhqatw2610ygybec45loenl0zb4rxofj4y38863nn5s6xev0e06dog10eu3qvtnwb095q6oent0xmehfcbanwr78xjtrn8ucj5pgvfycscyhxoid87q9od7yqfkcr83a3lj4j1dcyf412zilljcz3vsf4dse9eqo5na1ayaw08amxpqdysaafasag2ntxxywa42ftme43f6dc5kvsyj4jquvgbu38mamddp89alz75l76m3nymfxsor35jshn7rksoiy977sjrwokon1ggf60ano31q481fw0mno9t12r87e57k9ibdtqln45j2fl608v0lrpun6kbw2vkniiy84kj9qivysbj0o3mwtzd5ve2gfepzpq1s5uio2r3pw4fbvfp1d8tv0k0gijexz3r3p9dv0pvbh0f5vssseep2dwhg2v42habpg098xcv6l1ar71q6wx666zpa5tl5t67lm0iybli0xm2 == \h\5\e\i\9\n\m\0\i\i\e\9\f\y\m\d\9\b\e\0\9\f\c\2\3\7\v\8\7\6\c\n\u\t\h\h\q\a\t\w\2\6\1\0\y\g\y\b\e\c\4\5\l\o\e\n\l\0\z\b\4\r\x\o\f\j\4\y\3\8\8\6\3\n\n\5\s\6\x\e\v\0\e\0\6\d\o\g\1\0\e\u\3\q\v\t\n\w\b\0\9\5\q\6\o\e\n\t\0\x\m\e\h\f\c\b\a\n\w\r\7\8\x\j\t\r\n\8\u\c\j\5\p\g\v\f\y\c\s\c\y\h\x\o\i\d\8\7\q\9\o\d\7\y\q\f\k\c\r\8\3\a\3\l\j\4\j\1\d\c\y\f\4\1\2\z\i\l\l\j\c\z\3\v\s\f\4\d\s\e\9\e\q\o\5\n\a\1\a\y\a\w\0\8\a\m\x\p\q\d\y\s\a\a\f\a\s\a\g\2\n\t\x\x\y\w\a\4\2\f\t\m\e\4\3\f\6\d\c\5\k\v\s\y\j\4\j\q\u\v\g\b\u\3\8\m\a\m\d\d\p\8\9\a\l\z\7\5\l\7\6\m\3\n\y\m\f\x\s\o\r\3\5\j\s\h\n\7\r\k\s\o\i\y\9\7\7\s\j\r\w\o\k\o\n\1\g\g\f\6\0\a\n\o\3\1\q\4\8\1\f\w\0\m\n\o\9\t\1\2\r\8\7\e\5\7\k\9\i\b\d\t\q\l\n\4\5\j\2\f\l\6\0\8\v\0\l\r\p\u\n\6\k\b\w\2\v\k\n\i\i\y\8\4\k\j\9\q\i\v\y\s\b\j\0\o\3\m\w\t\z\d\5\v\e\2\g\f\e\p\z\p\q\1\s\5\u\i\o\2\r\3\p\w\4\f\b\v\f\p\1\d\8\t\v\0\k\0\g\i\j\e\x\z\3\r\3\p\9\d\v\0\p\v\b\h\0\f\5\v\s\s\s\e\e\p\2\d\w\h\g\2\v\4\2\h\a\b\p\g\0\9\8\x\c\v\6\l\1\a\r\7\1\q\6\w\x\6\6\6\z\p\a\5\t\l\5\t\6\7\l\m\0\i\y\b\l\i\0\x\m\2 ]] 00:06:41.985 15:29:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.985 15:29:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:41.985 [2024-04-17 15:29:43.347991] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:41.985 [2024-04-17 15:29:43.348072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63115 ] 00:06:42.244 [2024-04-17 15:29:43.479081] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.244 [2024-04-17 15:29:43.593907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.503  Copying: 512/512 [B] (average 166 kBps) 00:06:42.503 00:06:42.503 15:29:43 -- dd/posix.sh@93 -- # [[ h5ei9nm0iie9fymd9be09fc237v876cnuthhqatw2610ygybec45loenl0zb4rxofj4y38863nn5s6xev0e06dog10eu3qvtnwb095q6oent0xmehfcbanwr78xjtrn8ucj5pgvfycscyhxoid87q9od7yqfkcr83a3lj4j1dcyf412zilljcz3vsf4dse9eqo5na1ayaw08amxpqdysaafasag2ntxxywa42ftme43f6dc5kvsyj4jquvgbu38mamddp89alz75l76m3nymfxsor35jshn7rksoiy977sjrwokon1ggf60ano31q481fw0mno9t12r87e57k9ibdtqln45j2fl608v0lrpun6kbw2vkniiy84kj9qivysbj0o3mwtzd5ve2gfepzpq1s5uio2r3pw4fbvfp1d8tv0k0gijexz3r3p9dv0pvbh0f5vssseep2dwhg2v42habpg098xcv6l1ar71q6wx666zpa5tl5t67lm0iybli0xm2 == \h\5\e\i\9\n\m\0\i\i\e\9\f\y\m\d\9\b\e\0\9\f\c\2\3\7\v\8\7\6\c\n\u\t\h\h\q\a\t\w\2\6\1\0\y\g\y\b\e\c\4\5\l\o\e\n\l\0\z\b\4\r\x\o\f\j\4\y\3\8\8\6\3\n\n\5\s\6\x\e\v\0\e\0\6\d\o\g\1\0\e\u\3\q\v\t\n\w\b\0\9\5\q\6\o\e\n\t\0\x\m\e\h\f\c\b\a\n\w\r\7\8\x\j\t\r\n\8\u\c\j\5\p\g\v\f\y\c\s\c\y\h\x\o\i\d\8\7\q\9\o\d\7\y\q\f\k\c\r\8\3\a\3\l\j\4\j\1\d\c\y\f\4\1\2\z\i\l\l\j\c\z\3\v\s\f\4\d\s\e\9\e\q\o\5\n\a\1\a\y\a\w\0\8\a\m\x\p\q\d\y\s\a\a\f\a\s\a\g\2\n\t\x\x\y\w\a\4\2\f\t\m\e\4\3\f\6\d\c\5\k\v\s\y\j\4\j\q\u\v\g\b\u\3\8\m\a\m\d\d\p\8\9\a\l\z\7\5\l\7\6\m\3\n\y\m\f\x\s\o\r\3\5\j\s\h\n\7\r\k\s\o\i\y\9\7\7\s\j\r\w\o\k\o\n\1\g\g\f\6\0\a\n\o\3\1\q\4\8\1\f\w\0\m\n\o\9\t\1\2\r\8\7\e\5\7\k\9\i\b\d\t\q\l\n\4\5\j\2\f\l\6\0\8\v\0\l\r\p\u\n\6\k\b\w\2\v\k\n\i\i\y\8\4\k\j\9\q\i\v\y\s\b\j\0\o\3\m\w\t\z\d\5\v\e\2\g\f\e\p\z\p\q\1\s\5\u\i\o\2\r\3\p\w\4\f\b\v\f\p\1\d\8\t\v\0\k\0\g\i\j\e\x\z\3\r\3\p\9\d\v\0\p\v\b\h\0\f\5\v\s\s\s\e\e\p\2\d\w\h\g\2\v\4\2\h\a\b\p\g\0\9\8\x\c\v\6\l\1\a\r\7\1\q\6\w\x\6\6\6\z\p\a\5\t\l\5\t\6\7\l\m\0\i\y\b\l\i\0\x\m\2 ]] 00:06:42.503 00:06:42.503 real 0m5.639s 00:06:42.503 user 0m3.447s 00:06:42.503 sys 0m2.502s 00:06:42.503 15:29:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.503 ************************************ 00:06:42.503 END TEST dd_flags_misc 00:06:42.503 ************************************ 00:06:42.503 15:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.762 15:29:43 -- dd/posix.sh@131 -- # tests_forced_aio 00:06:42.762 15:29:43 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:42.762 * Second test run, disabling liburing, forcing AIO 00:06:42.762 15:29:43 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:42.762 15:29:43 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:42.762 15:29:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.762 15:29:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.762 15:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.762 ************************************ 00:06:42.762 START TEST dd_flag_append_forced_aio 00:06:42.762 ************************************ 00:06:42.762 15:29:44 -- common/autotest_common.sh@1111 -- # append 00:06:42.762 15:29:44 -- dd/posix.sh@16 -- # local dump0 00:06:42.762 15:29:44 -- dd/posix.sh@17 -- # local dump1 00:06:42.762 15:29:44 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:42.762 15:29:44 -- 
dd/common.sh@98 -- # xtrace_disable 00:06:42.762 15:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.762 15:29:44 -- dd/posix.sh@19 -- # dump0=gy90sdrpstd6wxu2ddk7ty9shq6k9tpj 00:06:42.762 15:29:44 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:42.762 15:29:44 -- dd/common.sh@98 -- # xtrace_disable 00:06:42.762 15:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.762 15:29:44 -- dd/posix.sh@20 -- # dump1=jrmmykznf9obkzlq7pzcczlz96czca0j 00:06:42.762 15:29:44 -- dd/posix.sh@22 -- # printf %s gy90sdrpstd6wxu2ddk7ty9shq6k9tpj 00:06:42.762 15:29:44 -- dd/posix.sh@23 -- # printf %s jrmmykznf9obkzlq7pzcczlz96czca0j 00:06:42.762 15:29:44 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:42.762 [2024-04-17 15:29:44.113949] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:42.762 [2024-04-17 15:29:44.114067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63142 ] 00:06:43.030 [2024-04-17 15:29:44.252876] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.030 [2024-04-17 15:29:44.368109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.289  Copying: 32/32 [B] (average 31 kBps) 00:06:43.289 00:06:43.289 15:29:44 -- dd/posix.sh@27 -- # [[ jrmmykznf9obkzlq7pzcczlz96czca0jgy90sdrpstd6wxu2ddk7ty9shq6k9tpj == \j\r\m\m\y\k\z\n\f\9\o\b\k\z\l\q\7\p\z\c\c\z\l\z\9\6\c\z\c\a\0\j\g\y\9\0\s\d\r\p\s\t\d\6\w\x\u\2\d\d\k\7\t\y\9\s\h\q\6\k\9\t\p\j ]] 00:06:43.289 00:06:43.289 real 0m0.663s 00:06:43.289 user 0m0.382s 00:06:43.289 sys 0m0.156s 00:06:43.289 15:29:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.289 ************************************ 00:06:43.289 END TEST dd_flag_append_forced_aio 00:06:43.289 ************************************ 00:06:43.289 15:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:43.574 15:29:44 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:43.574 15:29:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.574 15:29:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.574 15:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:43.574 ************************************ 00:06:43.574 START TEST dd_flag_directory_forced_aio 00:06:43.574 ************************************ 00:06:43.574 15:29:44 -- common/autotest_common.sh@1111 -- # directory 00:06:43.574 15:29:44 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.574 15:29:44 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.574 15:29:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.574 15:29:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.574 15:29:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.574 15:29:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
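The dd_flag_append_forced_aio trace above seeds dd.dump0 and dd.dump1 with two fresh 32-byte strings, copies dump0 onto dump1 with --aio and --oflag=append, and expects dump1 to read back as its original contents followed by dump0's. A minimal sketch of that check, assuming the same dump-file paths and using openssl in place of the test's gen_bytes helper:

    #!/usr/bin/env bash
    # Sketch only: --oflag=append should add the source bytes after the destination's
    # existing contents rather than truncating the file.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    dump0=$(openssl rand -hex 16)   # 32 ASCII characters, standing in for gen_bytes 32
    dump1=$(openssl rand -hex 16)
    printf %s "$dump0" > "$SRC"
    printf %s "$dump1" > "$DST"
    "$DD" --aio --if="$SRC" --of="$DST" --oflag=append
    # The destination should now hold its original string with the source appended.
    [[ "$(cat "$DST")" == "${dump1}${dump0}" ]] || exit 1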
00:06:43.574 15:29:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.574 15:29:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.574 15:29:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.574 15:29:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.574 15:29:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.574 15:29:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.574 [2024-04-17 15:29:44.885996] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:43.574 [2024-04-17 15:29:44.886064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63181 ] 00:06:43.839 [2024-04-17 15:29:45.021122] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.839 [2024-04-17 15:29:45.133024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.839 [2024-04-17 15:29:45.232724] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.839 [2024-04-17 15:29:45.232792] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.839 [2024-04-17 15:29:45.232818] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.098 [2024-04-17 15:29:45.365307] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:44.098 15:29:45 -- common/autotest_common.sh@641 -- # es=236 00:06:44.098 15:29:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:44.098 15:29:45 -- common/autotest_common.sh@650 -- # es=108 00:06:44.098 15:29:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:44.098 15:29:45 -- common/autotest_common.sh@658 -- # es=1 00:06:44.098 15:29:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:44.098 15:29:45 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.098 15:29:45 -- common/autotest_common.sh@638 -- # local es=0 00:06:44.098 15:29:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.098 15:29:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.098 15:29:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.098 15:29:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.098 15:29:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.098 15:29:45 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.098 15:29:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.098 15:29:45 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.098 15:29:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.098 15:29:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.357 [2024-04-17 15:29:45.558143] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:44.357 [2024-04-17 15:29:45.558256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:06:44.357 [2024-04-17 15:29:45.699869] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.616 [2024-04-17 15:29:45.816335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.616 [2024-04-17 15:29:45.907291] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.616 [2024-04-17 15:29:45.907343] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.616 [2024-04-17 15:29:45.907368] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.616 [2024-04-17 15:29:46.022807] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:44.875 15:29:46 -- common/autotest_common.sh@641 -- # es=236 00:06:44.875 15:29:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:44.875 15:29:46 -- common/autotest_common.sh@650 -- # es=108 00:06:44.875 15:29:46 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:44.875 15:29:46 -- common/autotest_common.sh@658 -- # es=1 00:06:44.875 15:29:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:44.875 00:06:44.875 real 0m1.309s 00:06:44.875 user 0m0.779s 00:06:44.876 sys 0m0.318s 00:06:44.876 15:29:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.876 15:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.876 ************************************ 00:06:44.876 END TEST dd_flag_directory_forced_aio 00:06:44.876 ************************************ 00:06:44.876 15:29:46 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:44.876 15:29:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.876 15:29:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.876 15:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.876 ************************************ 00:06:44.876 START TEST dd_flag_nofollow_forced_aio 00:06:44.876 ************************************ 00:06:44.876 15:29:46 -- common/autotest_common.sh@1111 -- # nofollow 00:06:44.876 15:29:46 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.876 15:29:46 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.876 15:29:46 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.876 15:29:46 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.876 15:29:46 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.876 15:29:46 -- common/autotest_common.sh@638 -- # local es=0 00:06:44.876 15:29:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.876 15:29:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.876 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.876 15:29:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.876 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.876 15:29:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.876 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.876 15:29:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.876 15:29:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.876 15:29:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.135 [2024-04-17 15:29:46.318365] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:45.135 [2024-04-17 15:29:46.318489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63223 ] 00:06:45.135 [2024-04-17 15:29:46.458855] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.394 [2024-04-17 15:29:46.586609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.394 [2024-04-17 15:29:46.674506] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.394 [2024-04-17 15:29:46.674562] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.394 [2024-04-17 15:29:46.674576] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.394 [2024-04-17 15:29:46.787891] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:45.653 15:29:46 -- common/autotest_common.sh@641 -- # es=216 00:06:45.653 15:29:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:45.653 15:29:46 -- common/autotest_common.sh@650 -- # es=88 00:06:45.653 15:29:46 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:45.653 15:29:46 -- common/autotest_common.sh@658 -- # es=1 00:06:45.653 15:29:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:45.653 15:29:46 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.653 15:29:46 -- common/autotest_common.sh@638 -- # local es=0 00:06:45.653 15:29:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.653 15:29:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.653 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:45.653 15:29:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.653 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:45.653 15:29:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.653 15:29:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:45.653 15:29:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.653 15:29:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.653 15:29:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.653 [2024-04-17 15:29:46.969416] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:45.653 [2024-04-17 15:29:46.969528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63238 ] 00:06:45.911 [2024-04-17 15:29:47.107860] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.912 [2024-04-17 15:29:47.228037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.912 [2024-04-17 15:29:47.322337] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.912 [2024-04-17 15:29:47.322392] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.912 [2024-04-17 15:29:47.322408] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.170 [2024-04-17 15:29:47.442034] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:06:46.170 15:29:47 -- common/autotest_common.sh@641 -- # es=216 00:06:46.170 15:29:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:46.170 15:29:47 -- common/autotest_common.sh@650 -- # es=88 00:06:46.170 15:29:47 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:46.170 15:29:47 -- common/autotest_common.sh@658 -- # es=1 00:06:46.170 15:29:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:46.170 15:29:47 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:46.170 15:29:47 -- dd/common.sh@98 -- # xtrace_disable 00:06:46.170 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 15:29:47 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.429 [2024-04-17 15:29:47.640242] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:46.429 [2024-04-17 15:29:47.640371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63251 ] 00:06:46.429 [2024-04-17 15:29:47.780187] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.687 [2024-04-17 15:29:47.910426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.946  Copying: 512/512 [B] (average 500 kBps) 00:06:46.946 00:06:46.946 15:29:48 -- dd/posix.sh@49 -- # [[ esrgex02j2gbobuadvpi9v3irvum6vx6garlgex5gm51crk0y2j180to0v4shlqjpaeaewb8939eqpmreekux0nt33t2ry19z6cjtgrhl6dwu7ufbipd2s7evnegq709bsgbngsaqt9gtucntjk4678k1epubhckd6zsmpl0l769libhrs4va28zny2iejf239r835miawsota392jhzpxxqi18zji0k5uylzrwsy8lz478dcw8fpz19evwygdkwk5xic4hp2t5edx4mbd18t9cfiurv3d7kq1qucgc18l2qoiudcgqlcl7etsyjrmgce9uytbd7bwvs8dyb5ircafe84vrtn54ekiwwxl7omy74bm3v6q7vf1s5l6wjt93oi74vtesrdov1mb97sf9pcy5hr8cncbizbsebmoccimz55eyq3l4p1fhulbyno91jila0xsx1uododry7admnnyva61c5c2x57n6ixfcljy4a73w7o35ppgb6iulwvbaw == \e\s\r\g\e\x\0\2\j\2\g\b\o\b\u\a\d\v\p\i\9\v\3\i\r\v\u\m\6\v\x\6\g\a\r\l\g\e\x\5\g\m\5\1\c\r\k\0\y\2\j\1\8\0\t\o\0\v\4\s\h\l\q\j\p\a\e\a\e\w\b\8\9\3\9\e\q\p\m\r\e\e\k\u\x\0\n\t\3\3\t\2\r\y\1\9\z\6\c\j\t\g\r\h\l\6\d\w\u\7\u\f\b\i\p\d\2\s\7\e\v\n\e\g\q\7\0\9\b\s\g\b\n\g\s\a\q\t\9\g\t\u\c\n\t\j\k\4\6\7\8\k\1\e\p\u\b\h\c\k\d\6\z\s\m\p\l\0\l\7\6\9\l\i\b\h\r\s\4\v\a\2\8\z\n\y\2\i\e\j\f\2\3\9\r\8\3\5\m\i\a\w\s\o\t\a\3\9\2\j\h\z\p\x\x\q\i\1\8\z\j\i\0\k\5\u\y\l\z\r\w\s\y\8\l\z\4\7\8\d\c\w\8\f\p\z\1\9\e\v\w\y\g\d\k\w\k\5\x\i\c\4\h\p\2\t\5\e\d\x\4\m\b\d\1\8\t\9\c\f\i\u\r\v\3\d\7\k\q\1\q\u\c\g\c\1\8\l\2\q\o\i\u\d\c\g\q\l\c\l\7\e\t\s\y\j\r\m\g\c\e\9\u\y\t\b\d\7\b\w\v\s\8\d\y\b\5\i\r\c\a\f\e\8\4\v\r\t\n\5\4\e\k\i\w\w\x\l\7\o\m\y\7\4\b\m\3\v\6\q\7\v\f\1\s\5\l\6\w\j\t\9\3\o\i\7\4\v\t\e\s\r\d\o\v\1\m\b\9\7\s\f\9\p\c\y\5\h\r\8\c\n\c\b\i\z\b\s\e\b\m\o\c\c\i\m\z\5\5\e\y\q\3\l\4\p\1\f\h\u\l\b\y\n\o\9\1\j\i\l\a\0\x\s\x\1\u\o\d\o\d\r\y\7\a\d\m\n\n\y\v\a\6\1\c\5\c\2\x\5\7\n\6\i\x\f\c\l\j\y\4\a\7\3\w\7\o\3\5\p\p\g\b\6\i\u\l\w\v\b\a\w ]] 00:06:46.946 00:06:46.946 real 0m2.014s 00:06:46.946 user 0m1.205s 00:06:46.946 sys 0m0.469s 00:06:46.946 15:29:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.946 ************************************ 00:06:46.946 END TEST dd_flag_nofollow_forced_aio 00:06:46.946 ************************************ 00:06:46.946 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 15:29:48 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:46.946 15:29:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.946 15:29:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.946 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.204 ************************************ 00:06:47.204 START TEST dd_flag_noatime_forced_aio 00:06:47.204 ************************************ 00:06:47.204 15:29:48 -- common/autotest_common.sh@1111 -- # noatime 00:06:47.204 15:29:48 -- dd/posix.sh@53 -- # local atime_if 00:06:47.204 15:29:48 -- dd/posix.sh@54 -- # local atime_of 00:06:47.204 15:29:48 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:47.204 15:29:48 -- dd/common.sh@98 -- # xtrace_disable 00:06:47.204 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.204 15:29:48 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.204 15:29:48 -- dd/posix.sh@60 -- # atime_if=1713367788 
00:06:47.204 15:29:48 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.204 15:29:48 -- dd/posix.sh@61 -- # atime_of=1713367788 00:06:47.204 15:29:48 -- dd/posix.sh@66 -- # sleep 1 00:06:48.140 15:29:49 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.141 [2024-04-17 15:29:49.477458] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:48.141 [2024-04-17 15:29:49.477552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63295 ] 00:06:48.399 [2024-04-17 15:29:49.614125] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.399 [2024-04-17 15:29:49.725947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.658  Copying: 512/512 [B] (average 500 kBps) 00:06:48.658 00:06:48.658 15:29:50 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.658 15:29:50 -- dd/posix.sh@69 -- # (( atime_if == 1713367788 )) 00:06:48.658 15:29:50 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.658 15:29:50 -- dd/posix.sh@70 -- # (( atime_of == 1713367788 )) 00:06:48.658 15:29:50 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.917 [2024-04-17 15:29:50.143059] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:48.917 [2024-04-17 15:29:50.143199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63307 ] 00:06:48.917 [2024-04-17 15:29:50.287373] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.185 [2024-04-17 15:29:50.401979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.444  Copying: 512/512 [B] (average 500 kBps) 00:06:49.444 00:06:49.444 15:29:50 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.444 ************************************ 00:06:49.444 END TEST dd_flag_noatime_forced_aio 00:06:49.444 ************************************ 00:06:49.444 15:29:50 -- dd/posix.sh@73 -- # (( atime_if < 1713367790 )) 00:06:49.444 00:06:49.444 real 0m2.373s 00:06:49.444 user 0m0.806s 00:06:49.444 sys 0m0.322s 00:06:49.444 15:29:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.444 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:49.444 15:29:50 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:49.444 15:29:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.444 15:29:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.444 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:49.702 ************************************ 00:06:49.702 START TEST dd_flags_misc_forced_aio 00:06:49.702 ************************************ 00:06:49.702 15:29:50 -- common/autotest_common.sh@1111 -- # io 00:06:49.702 15:29:50 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:49.702 15:29:50 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:49.702 15:29:50 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:49.702 15:29:50 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:49.702 15:29:50 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:49.702 15:29:50 -- dd/common.sh@98 -- # xtrace_disable 00:06:49.702 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:49.702 15:29:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.702 15:29:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:49.702 [2024-04-17 15:29:50.954634] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
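The dd_flag_noatime_forced_aio trace above records the access time of each dump file with stat --printf=%X, copies dump0 with --iflag=noatime and checks that the recorded atimes are unchanged, then repeats the copy without the flag and expects dump0's atime to move forward. A minimal sketch of that sequence, assuming the same dump-file paths and a filesystem whose mount options still allow atime updates:

    #!/usr/bin/env bash
    # Sketch only: --iflag=noatime should leave the source file's atime untouched,
    # while a plain read through spdk_dd should advance it.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$SRC")
    sleep 1
    "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
    (( $(stat --printf=%X "$SRC") == atime_if )) || exit 1   # atime must not change
    sleep 1
    "$DD" --aio --if="$SRC" --of="$DST"
    (( atime_if < $(stat --printf=%X "$SRC") )) || exit 1    # plain read updates atime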
00:06:49.702 [2024-04-17 15:29:50.954770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63343 ] 00:06:49.702 [2024-04-17 15:29:51.093132] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.961 [2024-04-17 15:29:51.212677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.220  Copying: 512/512 [B] (average 500 kBps) 00:06:50.220 00:06:50.220 15:29:51 -- dd/posix.sh@93 -- # [[ 42400yhbnzh279xt36t68c27gic690ouf139506e0qthj8ylnqgbvjnu2rmecrl5b2sasm08d13ijvub7jshrl24elr96mwcvra4igmkt6qjn6llp6hfh8uic6z1dpd3f10yq5mtm9t032v3n7ubhvurprl3cymybm6papg58noj793v4gr6kq4q6rlavrurs8ktpollisuu8nu9vdradd1kc4lvuuhijtq37kf52guhdy3e9mvfwt0gg1utrlu1nyj1oexov1yblsw4ysfynzhv8rfmioxw98xpkr4zvrcsntzqc5hw7yz48eia3oyzgnzelkbenzpxtrhi1o2d3pte4vaa46xn7drr2rertg7i405bl1eivegze53z97dqes7f70n6lqhzca825krwq4p7sj42wcja3yfl1t2y486fc3uqe1c9ytqx1fncvyctpg1wptvby3gjj6jem3hrozzy8gih4s0l1ieuj7au8jklwhb6nztsz3w36f64ohvn == \4\2\4\0\0\y\h\b\n\z\h\2\7\9\x\t\3\6\t\6\8\c\2\7\g\i\c\6\9\0\o\u\f\1\3\9\5\0\6\e\0\q\t\h\j\8\y\l\n\q\g\b\v\j\n\u\2\r\m\e\c\r\l\5\b\2\s\a\s\m\0\8\d\1\3\i\j\v\u\b\7\j\s\h\r\l\2\4\e\l\r\9\6\m\w\c\v\r\a\4\i\g\m\k\t\6\q\j\n\6\l\l\p\6\h\f\h\8\u\i\c\6\z\1\d\p\d\3\f\1\0\y\q\5\m\t\m\9\t\0\3\2\v\3\n\7\u\b\h\v\u\r\p\r\l\3\c\y\m\y\b\m\6\p\a\p\g\5\8\n\o\j\7\9\3\v\4\g\r\6\k\q\4\q\6\r\l\a\v\r\u\r\s\8\k\t\p\o\l\l\i\s\u\u\8\n\u\9\v\d\r\a\d\d\1\k\c\4\l\v\u\u\h\i\j\t\q\3\7\k\f\5\2\g\u\h\d\y\3\e\9\m\v\f\w\t\0\g\g\1\u\t\r\l\u\1\n\y\j\1\o\e\x\o\v\1\y\b\l\s\w\4\y\s\f\y\n\z\h\v\8\r\f\m\i\o\x\w\9\8\x\p\k\r\4\z\v\r\c\s\n\t\z\q\c\5\h\w\7\y\z\4\8\e\i\a\3\o\y\z\g\n\z\e\l\k\b\e\n\z\p\x\t\r\h\i\1\o\2\d\3\p\t\e\4\v\a\a\4\6\x\n\7\d\r\r\2\r\e\r\t\g\7\i\4\0\5\b\l\1\e\i\v\e\g\z\e\5\3\z\9\7\d\q\e\s\7\f\7\0\n\6\l\q\h\z\c\a\8\2\5\k\r\w\q\4\p\7\s\j\4\2\w\c\j\a\3\y\f\l\1\t\2\y\4\8\6\f\c\3\u\q\e\1\c\9\y\t\q\x\1\f\n\c\v\y\c\t\p\g\1\w\p\t\v\b\y\3\g\j\j\6\j\e\m\3\h\r\o\z\z\y\8\g\i\h\4\s\0\l\1\i\e\u\j\7\a\u\8\j\k\l\w\h\b\6\n\z\t\s\z\3\w\3\6\f\6\4\o\h\v\n ]] 00:06:50.220 15:29:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.220 15:29:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:50.220 [2024-04-17 15:29:51.604739] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:50.220 [2024-04-17 15:29:51.604859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63351 ] 00:06:50.478 [2024-04-17 15:29:51.740588] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.478 [2024-04-17 15:29:51.858613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.995  Copying: 512/512 [B] (average 500 kBps) 00:06:50.995 00:06:50.996 15:29:52 -- dd/posix.sh@93 -- # [[ 42400yhbnzh279xt36t68c27gic690ouf139506e0qthj8ylnqgbvjnu2rmecrl5b2sasm08d13ijvub7jshrl24elr96mwcvra4igmkt6qjn6llp6hfh8uic6z1dpd3f10yq5mtm9t032v3n7ubhvurprl3cymybm6papg58noj793v4gr6kq4q6rlavrurs8ktpollisuu8nu9vdradd1kc4lvuuhijtq37kf52guhdy3e9mvfwt0gg1utrlu1nyj1oexov1yblsw4ysfynzhv8rfmioxw98xpkr4zvrcsntzqc5hw7yz48eia3oyzgnzelkbenzpxtrhi1o2d3pte4vaa46xn7drr2rertg7i405bl1eivegze53z97dqes7f70n6lqhzca825krwq4p7sj42wcja3yfl1t2y486fc3uqe1c9ytqx1fncvyctpg1wptvby3gjj6jem3hrozzy8gih4s0l1ieuj7au8jklwhb6nztsz3w36f64ohvn == \4\2\4\0\0\y\h\b\n\z\h\2\7\9\x\t\3\6\t\6\8\c\2\7\g\i\c\6\9\0\o\u\f\1\3\9\5\0\6\e\0\q\t\h\j\8\y\l\n\q\g\b\v\j\n\u\2\r\m\e\c\r\l\5\b\2\s\a\s\m\0\8\d\1\3\i\j\v\u\b\7\j\s\h\r\l\2\4\e\l\r\9\6\m\w\c\v\r\a\4\i\g\m\k\t\6\q\j\n\6\l\l\p\6\h\f\h\8\u\i\c\6\z\1\d\p\d\3\f\1\0\y\q\5\m\t\m\9\t\0\3\2\v\3\n\7\u\b\h\v\u\r\p\r\l\3\c\y\m\y\b\m\6\p\a\p\g\5\8\n\o\j\7\9\3\v\4\g\r\6\k\q\4\q\6\r\l\a\v\r\u\r\s\8\k\t\p\o\l\l\i\s\u\u\8\n\u\9\v\d\r\a\d\d\1\k\c\4\l\v\u\u\h\i\j\t\q\3\7\k\f\5\2\g\u\h\d\y\3\e\9\m\v\f\w\t\0\g\g\1\u\t\r\l\u\1\n\y\j\1\o\e\x\o\v\1\y\b\l\s\w\4\y\s\f\y\n\z\h\v\8\r\f\m\i\o\x\w\9\8\x\p\k\r\4\z\v\r\c\s\n\t\z\q\c\5\h\w\7\y\z\4\8\e\i\a\3\o\y\z\g\n\z\e\l\k\b\e\n\z\p\x\t\r\h\i\1\o\2\d\3\p\t\e\4\v\a\a\4\6\x\n\7\d\r\r\2\r\e\r\t\g\7\i\4\0\5\b\l\1\e\i\v\e\g\z\e\5\3\z\9\7\d\q\e\s\7\f\7\0\n\6\l\q\h\z\c\a\8\2\5\k\r\w\q\4\p\7\s\j\4\2\w\c\j\a\3\y\f\l\1\t\2\y\4\8\6\f\c\3\u\q\e\1\c\9\y\t\q\x\1\f\n\c\v\y\c\t\p\g\1\w\p\t\v\b\y\3\g\j\j\6\j\e\m\3\h\r\o\z\z\y\8\g\i\h\4\s\0\l\1\i\e\u\j\7\a\u\8\j\k\l\w\h\b\6\n\z\t\s\z\3\w\3\6\f\6\4\o\h\v\n ]] 00:06:50.996 15:29:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.996 15:29:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:50.996 [2024-04-17 15:29:52.246989] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:50.996 [2024-04-17 15:29:52.247096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63358 ] 00:06:50.996 [2024-04-17 15:29:52.380910] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.255 [2024-04-17 15:29:52.486307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.513  Copying: 512/512 [B] (average 166 kBps) 00:06:51.513 00:06:51.513 15:29:52 -- dd/posix.sh@93 -- # [[ 42400yhbnzh279xt36t68c27gic690ouf139506e0qthj8ylnqgbvjnu2rmecrl5b2sasm08d13ijvub7jshrl24elr96mwcvra4igmkt6qjn6llp6hfh8uic6z1dpd3f10yq5mtm9t032v3n7ubhvurprl3cymybm6papg58noj793v4gr6kq4q6rlavrurs8ktpollisuu8nu9vdradd1kc4lvuuhijtq37kf52guhdy3e9mvfwt0gg1utrlu1nyj1oexov1yblsw4ysfynzhv8rfmioxw98xpkr4zvrcsntzqc5hw7yz48eia3oyzgnzelkbenzpxtrhi1o2d3pte4vaa46xn7drr2rertg7i405bl1eivegze53z97dqes7f70n6lqhzca825krwq4p7sj42wcja3yfl1t2y486fc3uqe1c9ytqx1fncvyctpg1wptvby3gjj6jem3hrozzy8gih4s0l1ieuj7au8jklwhb6nztsz3w36f64ohvn == \4\2\4\0\0\y\h\b\n\z\h\2\7\9\x\t\3\6\t\6\8\c\2\7\g\i\c\6\9\0\o\u\f\1\3\9\5\0\6\e\0\q\t\h\j\8\y\l\n\q\g\b\v\j\n\u\2\r\m\e\c\r\l\5\b\2\s\a\s\m\0\8\d\1\3\i\j\v\u\b\7\j\s\h\r\l\2\4\e\l\r\9\6\m\w\c\v\r\a\4\i\g\m\k\t\6\q\j\n\6\l\l\p\6\h\f\h\8\u\i\c\6\z\1\d\p\d\3\f\1\0\y\q\5\m\t\m\9\t\0\3\2\v\3\n\7\u\b\h\v\u\r\p\r\l\3\c\y\m\y\b\m\6\p\a\p\g\5\8\n\o\j\7\9\3\v\4\g\r\6\k\q\4\q\6\r\l\a\v\r\u\r\s\8\k\t\p\o\l\l\i\s\u\u\8\n\u\9\v\d\r\a\d\d\1\k\c\4\l\v\u\u\h\i\j\t\q\3\7\k\f\5\2\g\u\h\d\y\3\e\9\m\v\f\w\t\0\g\g\1\u\t\r\l\u\1\n\y\j\1\o\e\x\o\v\1\y\b\l\s\w\4\y\s\f\y\n\z\h\v\8\r\f\m\i\o\x\w\9\8\x\p\k\r\4\z\v\r\c\s\n\t\z\q\c\5\h\w\7\y\z\4\8\e\i\a\3\o\y\z\g\n\z\e\l\k\b\e\n\z\p\x\t\r\h\i\1\o\2\d\3\p\t\e\4\v\a\a\4\6\x\n\7\d\r\r\2\r\e\r\t\g\7\i\4\0\5\b\l\1\e\i\v\e\g\z\e\5\3\z\9\7\d\q\e\s\7\f\7\0\n\6\l\q\h\z\c\a\8\2\5\k\r\w\q\4\p\7\s\j\4\2\w\c\j\a\3\y\f\l\1\t\2\y\4\8\6\f\c\3\u\q\e\1\c\9\y\t\q\x\1\f\n\c\v\y\c\t\p\g\1\w\p\t\v\b\y\3\g\j\j\6\j\e\m\3\h\r\o\z\z\y\8\g\i\h\4\s\0\l\1\i\e\u\j\7\a\u\8\j\k\l\w\h\b\6\n\z\t\s\z\3\w\3\6\f\6\4\o\h\v\n ]] 00:06:51.513 15:29:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.513 15:29:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:51.513 [2024-04-17 15:29:52.880472] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:51.513 [2024-04-17 15:29:52.880589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63371 ] 00:06:51.772 [2024-04-17 15:29:53.016811] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.772 [2024-04-17 15:29:53.111359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.031  Copying: 512/512 [B] (average 250 kBps) 00:06:52.031 00:06:52.031 15:29:53 -- dd/posix.sh@93 -- # [[ 42400yhbnzh279xt36t68c27gic690ouf139506e0qthj8ylnqgbvjnu2rmecrl5b2sasm08d13ijvub7jshrl24elr96mwcvra4igmkt6qjn6llp6hfh8uic6z1dpd3f10yq5mtm9t032v3n7ubhvurprl3cymybm6papg58noj793v4gr6kq4q6rlavrurs8ktpollisuu8nu9vdradd1kc4lvuuhijtq37kf52guhdy3e9mvfwt0gg1utrlu1nyj1oexov1yblsw4ysfynzhv8rfmioxw98xpkr4zvrcsntzqc5hw7yz48eia3oyzgnzelkbenzpxtrhi1o2d3pte4vaa46xn7drr2rertg7i405bl1eivegze53z97dqes7f70n6lqhzca825krwq4p7sj42wcja3yfl1t2y486fc3uqe1c9ytqx1fncvyctpg1wptvby3gjj6jem3hrozzy8gih4s0l1ieuj7au8jklwhb6nztsz3w36f64ohvn == \4\2\4\0\0\y\h\b\n\z\h\2\7\9\x\t\3\6\t\6\8\c\2\7\g\i\c\6\9\0\o\u\f\1\3\9\5\0\6\e\0\q\t\h\j\8\y\l\n\q\g\b\v\j\n\u\2\r\m\e\c\r\l\5\b\2\s\a\s\m\0\8\d\1\3\i\j\v\u\b\7\j\s\h\r\l\2\4\e\l\r\9\6\m\w\c\v\r\a\4\i\g\m\k\t\6\q\j\n\6\l\l\p\6\h\f\h\8\u\i\c\6\z\1\d\p\d\3\f\1\0\y\q\5\m\t\m\9\t\0\3\2\v\3\n\7\u\b\h\v\u\r\p\r\l\3\c\y\m\y\b\m\6\p\a\p\g\5\8\n\o\j\7\9\3\v\4\g\r\6\k\q\4\q\6\r\l\a\v\r\u\r\s\8\k\t\p\o\l\l\i\s\u\u\8\n\u\9\v\d\r\a\d\d\1\k\c\4\l\v\u\u\h\i\j\t\q\3\7\k\f\5\2\g\u\h\d\y\3\e\9\m\v\f\w\t\0\g\g\1\u\t\r\l\u\1\n\y\j\1\o\e\x\o\v\1\y\b\l\s\w\4\y\s\f\y\n\z\h\v\8\r\f\m\i\o\x\w\9\8\x\p\k\r\4\z\v\r\c\s\n\t\z\q\c\5\h\w\7\y\z\4\8\e\i\a\3\o\y\z\g\n\z\e\l\k\b\e\n\z\p\x\t\r\h\i\1\o\2\d\3\p\t\e\4\v\a\a\4\6\x\n\7\d\r\r\2\r\e\r\t\g\7\i\4\0\5\b\l\1\e\i\v\e\g\z\e\5\3\z\9\7\d\q\e\s\7\f\7\0\n\6\l\q\h\z\c\a\8\2\5\k\r\w\q\4\p\7\s\j\4\2\w\c\j\a\3\y\f\l\1\t\2\y\4\8\6\f\c\3\u\q\e\1\c\9\y\t\q\x\1\f\n\c\v\y\c\t\p\g\1\w\p\t\v\b\y\3\g\j\j\6\j\e\m\3\h\r\o\z\z\y\8\g\i\h\4\s\0\l\1\i\e\u\j\7\a\u\8\j\k\l\w\h\b\6\n\z\t\s\z\3\w\3\6\f\6\4\o\h\v\n ]] 00:06:52.031 15:29:53 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:52.031 15:29:53 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:52.031 15:29:53 -- dd/common.sh@98 -- # xtrace_disable 00:06:52.031 15:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:52.031 15:29:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.031 15:29:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:52.289 [2024-04-17 15:29:53.509414] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:52.289 [2024-04-17 15:29:53.509507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:06:52.289 [2024-04-17 15:29:53.647650] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.548 [2024-04-17 15:29:53.756606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.806  Copying: 512/512 [B] (average 500 kBps) 00:06:52.806 00:06:52.807 15:29:54 -- dd/posix.sh@93 -- # [[ psaoze8lutcayckrxxyxs6k3typzb5nbvkjjx9nkqc527wk9gqlalym1vssyb4tdiq9roue937ezovpamy8v4bdib1rkde1taz16k9mssn7vs252oknzokpsx4cj8k4l0nbtnoclai7nlnppma4p88w0l8w2ujvuc862eiu8q7cu86qvvryze0lep7bn6nokd6bfjj6awvnukg441wxohjz1inbaf9alam04sxcom426aq7fzqnv3t4pjq0bca65av4bd9qr5p3dtje0spijzaq5gobbvrgho7pgn659uw3ut3agl6jw8llx9ubqve0z2jb75sa0q9r6tyr55t4gxurlep9hb7q2mb4w70d5ern9hu84lcbqbi4i1m1j1q059453b949nl58md714su7icytv4f16tycmwq41y07s2ynupqzyonkmu5q682igu71hpeknswnmtmrrw14excxax8dpr7dnz9wslion8piuz1ruhcyhi06vu6ulqa6ic2d == \p\s\a\o\z\e\8\l\u\t\c\a\y\c\k\r\x\x\y\x\s\6\k\3\t\y\p\z\b\5\n\b\v\k\j\j\x\9\n\k\q\c\5\2\7\w\k\9\g\q\l\a\l\y\m\1\v\s\s\y\b\4\t\d\i\q\9\r\o\u\e\9\3\7\e\z\o\v\p\a\m\y\8\v\4\b\d\i\b\1\r\k\d\e\1\t\a\z\1\6\k\9\m\s\s\n\7\v\s\2\5\2\o\k\n\z\o\k\p\s\x\4\c\j\8\k\4\l\0\n\b\t\n\o\c\l\a\i\7\n\l\n\p\p\m\a\4\p\8\8\w\0\l\8\w\2\u\j\v\u\c\8\6\2\e\i\u\8\q\7\c\u\8\6\q\v\v\r\y\z\e\0\l\e\p\7\b\n\6\n\o\k\d\6\b\f\j\j\6\a\w\v\n\u\k\g\4\4\1\w\x\o\h\j\z\1\i\n\b\a\f\9\a\l\a\m\0\4\s\x\c\o\m\4\2\6\a\q\7\f\z\q\n\v\3\t\4\p\j\q\0\b\c\a\6\5\a\v\4\b\d\9\q\r\5\p\3\d\t\j\e\0\s\p\i\j\z\a\q\5\g\o\b\b\v\r\g\h\o\7\p\g\n\6\5\9\u\w\3\u\t\3\a\g\l\6\j\w\8\l\l\x\9\u\b\q\v\e\0\z\2\j\b\7\5\s\a\0\q\9\r\6\t\y\r\5\5\t\4\g\x\u\r\l\e\p\9\h\b\7\q\2\m\b\4\w\7\0\d\5\e\r\n\9\h\u\8\4\l\c\b\q\b\i\4\i\1\m\1\j\1\q\0\5\9\4\5\3\b\9\4\9\n\l\5\8\m\d\7\1\4\s\u\7\i\c\y\t\v\4\f\1\6\t\y\c\m\w\q\4\1\y\0\7\s\2\y\n\u\p\q\z\y\o\n\k\m\u\5\q\6\8\2\i\g\u\7\1\h\p\e\k\n\s\w\n\m\t\m\r\r\w\1\4\e\x\c\x\a\x\8\d\p\r\7\d\n\z\9\w\s\l\i\o\n\8\p\i\u\z\1\r\u\h\c\y\h\i\0\6\v\u\6\u\l\q\a\6\i\c\2\d ]] 00:06:52.807 15:29:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.807 15:29:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:52.807 [2024-04-17 15:29:54.139540] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:52.807 [2024-04-17 15:29:54.139658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:06:53.065 [2024-04-17 15:29:54.278718] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.065 [2024-04-17 15:29:54.374523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.323  Copying: 512/512 [B] (average 500 kBps) 00:06:53.323 00:06:53.323 15:29:54 -- dd/posix.sh@93 -- # [[ psaoze8lutcayckrxxyxs6k3typzb5nbvkjjx9nkqc527wk9gqlalym1vssyb4tdiq9roue937ezovpamy8v4bdib1rkde1taz16k9mssn7vs252oknzokpsx4cj8k4l0nbtnoclai7nlnppma4p88w0l8w2ujvuc862eiu8q7cu86qvvryze0lep7bn6nokd6bfjj6awvnukg441wxohjz1inbaf9alam04sxcom426aq7fzqnv3t4pjq0bca65av4bd9qr5p3dtje0spijzaq5gobbvrgho7pgn659uw3ut3agl6jw8llx9ubqve0z2jb75sa0q9r6tyr55t4gxurlep9hb7q2mb4w70d5ern9hu84lcbqbi4i1m1j1q059453b949nl58md714su7icytv4f16tycmwq41y07s2ynupqzyonkmu5q682igu71hpeknswnmtmrrw14excxax8dpr7dnz9wslion8piuz1ruhcyhi06vu6ulqa6ic2d == \p\s\a\o\z\e\8\l\u\t\c\a\y\c\k\r\x\x\y\x\s\6\k\3\t\y\p\z\b\5\n\b\v\k\j\j\x\9\n\k\q\c\5\2\7\w\k\9\g\q\l\a\l\y\m\1\v\s\s\y\b\4\t\d\i\q\9\r\o\u\e\9\3\7\e\z\o\v\p\a\m\y\8\v\4\b\d\i\b\1\r\k\d\e\1\t\a\z\1\6\k\9\m\s\s\n\7\v\s\2\5\2\o\k\n\z\o\k\p\s\x\4\c\j\8\k\4\l\0\n\b\t\n\o\c\l\a\i\7\n\l\n\p\p\m\a\4\p\8\8\w\0\l\8\w\2\u\j\v\u\c\8\6\2\e\i\u\8\q\7\c\u\8\6\q\v\v\r\y\z\e\0\l\e\p\7\b\n\6\n\o\k\d\6\b\f\j\j\6\a\w\v\n\u\k\g\4\4\1\w\x\o\h\j\z\1\i\n\b\a\f\9\a\l\a\m\0\4\s\x\c\o\m\4\2\6\a\q\7\f\z\q\n\v\3\t\4\p\j\q\0\b\c\a\6\5\a\v\4\b\d\9\q\r\5\p\3\d\t\j\e\0\s\p\i\j\z\a\q\5\g\o\b\b\v\r\g\h\o\7\p\g\n\6\5\9\u\w\3\u\t\3\a\g\l\6\j\w\8\l\l\x\9\u\b\q\v\e\0\z\2\j\b\7\5\s\a\0\q\9\r\6\t\y\r\5\5\t\4\g\x\u\r\l\e\p\9\h\b\7\q\2\m\b\4\w\7\0\d\5\e\r\n\9\h\u\8\4\l\c\b\q\b\i\4\i\1\m\1\j\1\q\0\5\9\4\5\3\b\9\4\9\n\l\5\8\m\d\7\1\4\s\u\7\i\c\y\t\v\4\f\1\6\t\y\c\m\w\q\4\1\y\0\7\s\2\y\n\u\p\q\z\y\o\n\k\m\u\5\q\6\8\2\i\g\u\7\1\h\p\e\k\n\s\w\n\m\t\m\r\r\w\1\4\e\x\c\x\a\x\8\d\p\r\7\d\n\z\9\w\s\l\i\o\n\8\p\i\u\z\1\r\u\h\c\y\h\i\0\6\v\u\6\u\l\q\a\6\i\c\2\d ]] 00:06:53.323 15:29:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.323 15:29:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:53.323 [2024-04-17 15:29:54.759654] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:53.323 [2024-04-17 15:29:54.759786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63394 ] 00:06:53.581 [2024-04-17 15:29:54.896016] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.581 [2024-04-17 15:29:54.995782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.098  Copying: 512/512 [B] (average 166 kBps) 00:06:54.098 00:06:54.098 15:29:55 -- dd/posix.sh@93 -- # [[ psaoze8lutcayckrxxyxs6k3typzb5nbvkjjx9nkqc527wk9gqlalym1vssyb4tdiq9roue937ezovpamy8v4bdib1rkde1taz16k9mssn7vs252oknzokpsx4cj8k4l0nbtnoclai7nlnppma4p88w0l8w2ujvuc862eiu8q7cu86qvvryze0lep7bn6nokd6bfjj6awvnukg441wxohjz1inbaf9alam04sxcom426aq7fzqnv3t4pjq0bca65av4bd9qr5p3dtje0spijzaq5gobbvrgho7pgn659uw3ut3agl6jw8llx9ubqve0z2jb75sa0q9r6tyr55t4gxurlep9hb7q2mb4w70d5ern9hu84lcbqbi4i1m1j1q059453b949nl58md714su7icytv4f16tycmwq41y07s2ynupqzyonkmu5q682igu71hpeknswnmtmrrw14excxax8dpr7dnz9wslion8piuz1ruhcyhi06vu6ulqa6ic2d == \p\s\a\o\z\e\8\l\u\t\c\a\y\c\k\r\x\x\y\x\s\6\k\3\t\y\p\z\b\5\n\b\v\k\j\j\x\9\n\k\q\c\5\2\7\w\k\9\g\q\l\a\l\y\m\1\v\s\s\y\b\4\t\d\i\q\9\r\o\u\e\9\3\7\e\z\o\v\p\a\m\y\8\v\4\b\d\i\b\1\r\k\d\e\1\t\a\z\1\6\k\9\m\s\s\n\7\v\s\2\5\2\o\k\n\z\o\k\p\s\x\4\c\j\8\k\4\l\0\n\b\t\n\o\c\l\a\i\7\n\l\n\p\p\m\a\4\p\8\8\w\0\l\8\w\2\u\j\v\u\c\8\6\2\e\i\u\8\q\7\c\u\8\6\q\v\v\r\y\z\e\0\l\e\p\7\b\n\6\n\o\k\d\6\b\f\j\j\6\a\w\v\n\u\k\g\4\4\1\w\x\o\h\j\z\1\i\n\b\a\f\9\a\l\a\m\0\4\s\x\c\o\m\4\2\6\a\q\7\f\z\q\n\v\3\t\4\p\j\q\0\b\c\a\6\5\a\v\4\b\d\9\q\r\5\p\3\d\t\j\e\0\s\p\i\j\z\a\q\5\g\o\b\b\v\r\g\h\o\7\p\g\n\6\5\9\u\w\3\u\t\3\a\g\l\6\j\w\8\l\l\x\9\u\b\q\v\e\0\z\2\j\b\7\5\s\a\0\q\9\r\6\t\y\r\5\5\t\4\g\x\u\r\l\e\p\9\h\b\7\q\2\m\b\4\w\7\0\d\5\e\r\n\9\h\u\8\4\l\c\b\q\b\i\4\i\1\m\1\j\1\q\0\5\9\4\5\3\b\9\4\9\n\l\5\8\m\d\7\1\4\s\u\7\i\c\y\t\v\4\f\1\6\t\y\c\m\w\q\4\1\y\0\7\s\2\y\n\u\p\q\z\y\o\n\k\m\u\5\q\6\8\2\i\g\u\7\1\h\p\e\k\n\s\w\n\m\t\m\r\r\w\1\4\e\x\c\x\a\x\8\d\p\r\7\d\n\z\9\w\s\l\i\o\n\8\p\i\u\z\1\r\u\h\c\y\h\i\0\6\v\u\6\u\l\q\a\6\i\c\2\d ]] 00:06:54.098 15:29:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.098 15:29:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:54.098 [2024-04-17 15:29:55.382352] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:54.098 [2024-04-17 15:29:55.382481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63401 ] 00:06:54.098 [2024-04-17 15:29:55.519814] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.357 [2024-04-17 15:29:55.623216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.616  Copying: 512/512 [B] (average 11 kBps) 00:06:54.616 00:06:54.616 ************************************ 00:06:54.616 END TEST dd_flags_misc_forced_aio 00:06:54.616 ************************************ 00:06:54.616 15:29:56 -- dd/posix.sh@93 -- # [[ psaoze8lutcayckrxxyxs6k3typzb5nbvkjjx9nkqc527wk9gqlalym1vssyb4tdiq9roue937ezovpamy8v4bdib1rkde1taz16k9mssn7vs252oknzokpsx4cj8k4l0nbtnoclai7nlnppma4p88w0l8w2ujvuc862eiu8q7cu86qvvryze0lep7bn6nokd6bfjj6awvnukg441wxohjz1inbaf9alam04sxcom426aq7fzqnv3t4pjq0bca65av4bd9qr5p3dtje0spijzaq5gobbvrgho7pgn659uw3ut3agl6jw8llx9ubqve0z2jb75sa0q9r6tyr55t4gxurlep9hb7q2mb4w70d5ern9hu84lcbqbi4i1m1j1q059453b949nl58md714su7icytv4f16tycmwq41y07s2ynupqzyonkmu5q682igu71hpeknswnmtmrrw14excxax8dpr7dnz9wslion8piuz1ruhcyhi06vu6ulqa6ic2d == \p\s\a\o\z\e\8\l\u\t\c\a\y\c\k\r\x\x\y\x\s\6\k\3\t\y\p\z\b\5\n\b\v\k\j\j\x\9\n\k\q\c\5\2\7\w\k\9\g\q\l\a\l\y\m\1\v\s\s\y\b\4\t\d\i\q\9\r\o\u\e\9\3\7\e\z\o\v\p\a\m\y\8\v\4\b\d\i\b\1\r\k\d\e\1\t\a\z\1\6\k\9\m\s\s\n\7\v\s\2\5\2\o\k\n\z\o\k\p\s\x\4\c\j\8\k\4\l\0\n\b\t\n\o\c\l\a\i\7\n\l\n\p\p\m\a\4\p\8\8\w\0\l\8\w\2\u\j\v\u\c\8\6\2\e\i\u\8\q\7\c\u\8\6\q\v\v\r\y\z\e\0\l\e\p\7\b\n\6\n\o\k\d\6\b\f\j\j\6\a\w\v\n\u\k\g\4\4\1\w\x\o\h\j\z\1\i\n\b\a\f\9\a\l\a\m\0\4\s\x\c\o\m\4\2\6\a\q\7\f\z\q\n\v\3\t\4\p\j\q\0\b\c\a\6\5\a\v\4\b\d\9\q\r\5\p\3\d\t\j\e\0\s\p\i\j\z\a\q\5\g\o\b\b\v\r\g\h\o\7\p\g\n\6\5\9\u\w\3\u\t\3\a\g\l\6\j\w\8\l\l\x\9\u\b\q\v\e\0\z\2\j\b\7\5\s\a\0\q\9\r\6\t\y\r\5\5\t\4\g\x\u\r\l\e\p\9\h\b\7\q\2\m\b\4\w\7\0\d\5\e\r\n\9\h\u\8\4\l\c\b\q\b\i\4\i\1\m\1\j\1\q\0\5\9\4\5\3\b\9\4\9\n\l\5\8\m\d\7\1\4\s\u\7\i\c\y\t\v\4\f\1\6\t\y\c\m\w\q\4\1\y\0\7\s\2\y\n\u\p\q\z\y\o\n\k\m\u\5\q\6\8\2\i\g\u\7\1\h\p\e\k\n\s\w\n\m\t\m\r\r\w\1\4\e\x\c\x\a\x\8\d\p\r\7\d\n\z\9\w\s\l\i\o\n\8\p\i\u\z\1\r\u\h\c\y\h\i\0\6\v\u\6\u\l\q\a\6\i\c\2\d ]] 00:06:54.616 00:06:54.616 real 0m5.116s 00:06:54.616 user 0m2.961s 00:06:54.616 sys 0m1.134s 00:06:54.616 15:29:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.616 15:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:54.616 15:29:56 -- dd/posix.sh@1 -- # cleanup 00:06:54.616 15:29:56 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:54.616 15:29:56 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:54.616 ************************************ 00:06:54.616 END TEST spdk_dd_posix 00:06:54.616 ************************************ 00:06:54.616 00:06:54.616 real 0m25.827s 00:06:54.616 user 0m14.012s 00:06:54.616 sys 0m7.838s 00:06:54.616 15:29:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.616 15:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:54.875 15:29:56 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:54.875 15:29:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.875 15:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.875 15:29:56 -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.875 ************************************ 00:06:54.875 START TEST spdk_dd_malloc 00:06:54.875 ************************************ 00:06:54.875 15:29:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:54.875 * Looking for test storage... 00:06:54.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:54.875 15:29:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.875 15:29:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.875 15:29:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.875 15:29:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.875 15:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.875 15:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.875 15:29:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.875 15:29:56 -- paths/export.sh@5 -- # export PATH 00:06:54.875 15:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.875 15:29:56 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:54.875 15:29:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.875 15:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.875 15:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:55.134 ************************************ 00:06:55.134 START TEST dd_malloc_copy 00:06:55.134 
************************************ 00:06:55.134 15:29:56 -- common/autotest_common.sh@1111 -- # malloc_copy 00:06:55.134 15:29:56 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:55.134 15:29:56 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:55.134 15:29:56 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:55.134 15:29:56 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:55.134 15:29:56 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:55.134 15:29:56 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:55.134 15:29:56 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:55.134 15:29:56 -- dd/malloc.sh@28 -- # gen_conf 00:06:55.134 15:29:56 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.134 15:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:55.134 [2024-04-17 15:29:56.388214] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:06:55.134 [2024-04-17 15:29:56.388565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63484 ] 00:06:55.134 { 00:06:55.134 "subsystems": [ 00:06:55.134 { 00:06:55.134 "subsystem": "bdev", 00:06:55.134 "config": [ 00:06:55.134 { 00:06:55.134 "params": { 00:06:55.134 "block_size": 512, 00:06:55.134 "num_blocks": 1048576, 00:06:55.134 "name": "malloc0" 00:06:55.134 }, 00:06:55.134 "method": "bdev_malloc_create" 00:06:55.134 }, 00:06:55.134 { 00:06:55.134 "params": { 00:06:55.134 "block_size": 512, 00:06:55.134 "num_blocks": 1048576, 00:06:55.134 "name": "malloc1" 00:06:55.134 }, 00:06:55.134 "method": "bdev_malloc_create" 00:06:55.134 }, 00:06:55.134 { 00:06:55.134 "method": "bdev_wait_for_examine" 00:06:55.134 } 00:06:55.134 ] 00:06:55.134 } 00:06:55.134 ] 00:06:55.134 } 00:06:55.134 [2024-04-17 15:29:56.529170] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.392 [2024-04-17 15:29:56.630871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.681  Copying: 211/512 [MB] (211 MBps) Copying: 382/512 [MB] (171 MBps) Copying: 512/512 [MB] (average 188 MBps) 00:06:59.681 00:06:59.681 15:30:00 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:59.681 15:30:00 -- dd/malloc.sh@33 -- # gen_conf 00:06:59.681 15:30:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:59.681 15:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-04-17 15:30:00.752346] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:06:59.681 [2024-04-17 15:30:00.752453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63543 ] 00:06:59.681 { 00:06:59.681 "subsystems": [ 00:06:59.681 { 00:06:59.681 "subsystem": "bdev", 00:06:59.681 "config": [ 00:06:59.681 { 00:06:59.681 "params": { 00:06:59.681 "block_size": 512, 00:06:59.681 "num_blocks": 1048576, 00:06:59.681 "name": "malloc0" 00:06:59.681 }, 00:06:59.681 "method": "bdev_malloc_create" 00:06:59.681 }, 00:06:59.681 { 00:06:59.681 "params": { 00:06:59.681 "block_size": 512, 00:06:59.681 "num_blocks": 1048576, 00:06:59.681 "name": "malloc1" 00:06:59.681 }, 00:06:59.681 "method": "bdev_malloc_create" 00:06:59.681 }, 00:06:59.681 { 00:06:59.681 "method": "bdev_wait_for_examine" 00:06:59.681 } 00:06:59.681 ] 00:06:59.681 } 00:06:59.681 ] 00:06:59.681 } 00:06:59.681 [2024-04-17 15:30:00.893649] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.681 [2024-04-17 15:30:01.040327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.760  Copying: 188/512 [MB] (188 MBps) Copying: 376/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 187 MBps) 00:07:03.760 00:07:03.760 ************************************ 00:07:03.760 END TEST dd_malloc_copy 00:07:03.760 ************************************ 00:07:03.760 00:07:03.760 real 0m8.677s 00:07:03.760 user 0m7.487s 00:07:03.760 sys 0m1.015s 00:07:03.760 15:30:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.760 15:30:05 -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 ************************************ 00:07:03.760 END TEST spdk_dd_malloc 00:07:03.760 ************************************ 00:07:03.760 00:07:03.760 real 0m8.886s 00:07:03.760 user 0m7.568s 00:07:03.760 sys 0m1.133s 00:07:03.760 15:30:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.760 15:30:05 -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 15:30:05 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:03.760 15:30:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:03.760 15:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.760 15:30:05 -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 ************************************ 00:07:03.760 START TEST spdk_dd_bdev_to_bdev 00:07:03.760 ************************************ 00:07:03.760 15:30:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:04.019 * Looking for test storage... 
00:07:04.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:04.019 15:30:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.019 15:30:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.019 15:30:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.019 15:30:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.019 15:30:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.019 15:30:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.019 15:30:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.019 15:30:05 -- paths/export.sh@5 -- # export PATH 00:07:04.019 15:30:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:04.019 15:30:05 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:04.019 15:30:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:04.019 15:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.019 15:30:05 -- common/autotest_common.sh@10 -- # set +x 00:07:04.019 ************************************ 00:07:04.019 START TEST dd_inflate_file 00:07:04.019 ************************************ 00:07:04.019 15:30:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:04.019 [2024-04-17 15:30:05.416567] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:04.019 [2024-04-17 15:30:05.416658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63662 ] 00:07:04.278 [2024-04-17 15:30:05.556112] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.278 [2024-04-17 15:30:05.663149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.795  Copying: 64/64 [MB] (average 1454 MBps) 00:07:04.795 00:07:04.795 00:07:04.795 real 0m0.853s 00:07:04.795 user 0m0.547s 00:07:04.795 sys 0m0.406s 00:07:04.795 ************************************ 00:07:04.795 END TEST dd_inflate_file 00:07:04.795 ************************************ 00:07:04.795 15:30:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.795 15:30:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.053 15:30:06 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:05.053 15:30:06 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:05.053 15:30:06 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:05.053 15:30:06 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:05.053 15:30:06 -- dd/common.sh@31 -- # xtrace_disable 00:07:05.053 15:30:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:05.053 15:30:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.053 15:30:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.054 15:30:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.054 { 00:07:05.054 "subsystems": [ 00:07:05.054 { 00:07:05.054 "subsystem": "bdev", 
00:07:05.054 "config": [ 00:07:05.054 { 00:07:05.054 "params": { 00:07:05.054 "trtype": "pcie", 00:07:05.054 "traddr": "0000:00:10.0", 00:07:05.054 "name": "Nvme0" 00:07:05.054 }, 00:07:05.054 "method": "bdev_nvme_attach_controller" 00:07:05.054 }, 00:07:05.054 { 00:07:05.054 "params": { 00:07:05.054 "trtype": "pcie", 00:07:05.054 "traddr": "0000:00:11.0", 00:07:05.054 "name": "Nvme1" 00:07:05.054 }, 00:07:05.054 "method": "bdev_nvme_attach_controller" 00:07:05.054 }, 00:07:05.054 { 00:07:05.054 "method": "bdev_wait_for_examine" 00:07:05.054 } 00:07:05.054 ] 00:07:05.054 } 00:07:05.054 ] 00:07:05.054 } 00:07:05.054 ************************************ 00:07:05.054 START TEST dd_copy_to_out_bdev 00:07:05.054 ************************************ 00:07:05.054 15:30:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:05.054 [2024-04-17 15:30:06.409813] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:05.054 [2024-04-17 15:30:06.410271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63707 ] 00:07:05.312 [2024-04-17 15:30:06.550123] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.312 [2024-04-17 15:30:06.688523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.258  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:07:07.258 00:07:07.258 ************************************ 00:07:07.258 END TEST dd_copy_to_out_bdev 00:07:07.258 ************************************ 00:07:07.258 00:07:07.258 real 0m2.153s 00:07:07.258 user 0m1.792s 00:07:07.258 sys 0m1.600s 00:07:07.258 15:30:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.258 15:30:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:07.258 15:30:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.258 15:30:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.258 15:30:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.258 ************************************ 00:07:07.258 START TEST dd_offset_magic 00:07:07.258 ************************************ 00:07:07.258 15:30:08 -- common/autotest_common.sh@1111 -- # offset_magic 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:07.258 15:30:08 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:07.258 15:30:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:07.258 15:30:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.258 [2024-04-17 15:30:08.677009] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:07.258 [2024-04-17 15:30:08.677152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63754 ] 00:07:07.258 { 00:07:07.258 "subsystems": [ 00:07:07.258 { 00:07:07.258 "subsystem": "bdev", 00:07:07.258 "config": [ 00:07:07.258 { 00:07:07.258 "params": { 00:07:07.258 "trtype": "pcie", 00:07:07.258 "traddr": "0000:00:10.0", 00:07:07.258 "name": "Nvme0" 00:07:07.258 }, 00:07:07.258 "method": "bdev_nvme_attach_controller" 00:07:07.258 }, 00:07:07.258 { 00:07:07.258 "params": { 00:07:07.258 "trtype": "pcie", 00:07:07.258 "traddr": "0000:00:11.0", 00:07:07.258 "name": "Nvme1" 00:07:07.258 }, 00:07:07.258 "method": "bdev_nvme_attach_controller" 00:07:07.258 }, 00:07:07.258 { 00:07:07.258 "method": "bdev_wait_for_examine" 00:07:07.258 } 00:07:07.258 ] 00:07:07.258 } 00:07:07.258 ] 00:07:07.258 } 00:07:07.517 [2024-04-17 15:30:08.811767] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.777 [2024-04-17 15:30:08.970461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.605  Copying: 65/65 [MB] (average 802 MBps) 00:07:08.605 00:07:08.605 15:30:09 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:08.605 15:30:09 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:08.605 15:30:09 -- dd/common.sh@31 -- # xtrace_disable 00:07:08.605 15:30:09 -- common/autotest_common.sh@10 -- # set +x 00:07:08.605 [2024-04-17 15:30:09.829344] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:08.605 [2024-04-17 15:30:09.829469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63774 ] 00:07:08.605 { 00:07:08.605 "subsystems": [ 00:07:08.605 { 00:07:08.605 "subsystem": "bdev", 00:07:08.605 "config": [ 00:07:08.605 { 00:07:08.605 "params": { 00:07:08.605 "trtype": "pcie", 00:07:08.605 "traddr": "0000:00:10.0", 00:07:08.605 "name": "Nvme0" 00:07:08.605 }, 00:07:08.605 "method": "bdev_nvme_attach_controller" 00:07:08.605 }, 00:07:08.605 { 00:07:08.605 "params": { 00:07:08.605 "trtype": "pcie", 00:07:08.605 "traddr": "0000:00:11.0", 00:07:08.605 "name": "Nvme1" 00:07:08.605 }, 00:07:08.605 "method": "bdev_nvme_attach_controller" 00:07:08.605 }, 00:07:08.605 { 00:07:08.605 "method": "bdev_wait_for_examine" 00:07:08.605 } 00:07:08.605 ] 00:07:08.605 } 00:07:08.605 ] 00:07:08.605 } 00:07:08.605 [2024-04-17 15:30:09.971470] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.864 [2024-04-17 15:30:10.103665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.382  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:09.382 00:07:09.382 15:30:10 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:09.382 15:30:10 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:09.382 15:30:10 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:09.382 15:30:10 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:09.382 15:30:10 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:09.382 15:30:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:09.382 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:07:09.382 [2024-04-17 15:30:10.776804] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:09.382 [2024-04-17 15:30:10.776904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63798 ] 00:07:09.382 { 00:07:09.382 "subsystems": [ 00:07:09.382 { 00:07:09.382 "subsystem": "bdev", 00:07:09.382 "config": [ 00:07:09.382 { 00:07:09.382 "params": { 00:07:09.382 "trtype": "pcie", 00:07:09.382 "traddr": "0000:00:10.0", 00:07:09.382 "name": "Nvme0" 00:07:09.383 }, 00:07:09.383 "method": "bdev_nvme_attach_controller" 00:07:09.383 }, 00:07:09.383 { 00:07:09.383 "params": { 00:07:09.383 "trtype": "pcie", 00:07:09.383 "traddr": "0000:00:11.0", 00:07:09.383 "name": "Nvme1" 00:07:09.383 }, 00:07:09.383 "method": "bdev_nvme_attach_controller" 00:07:09.383 }, 00:07:09.383 { 00:07:09.383 "method": "bdev_wait_for_examine" 00:07:09.383 } 00:07:09.383 ] 00:07:09.383 } 00:07:09.383 ] 00:07:09.383 } 00:07:09.642 [2024-04-17 15:30:10.917545] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.642 [2024-04-17 15:30:11.061170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.470  Copying: 65/65 [MB] (average 890 MBps) 00:07:10.470 00:07:10.470 15:30:11 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:10.470 15:30:11 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:10.470 15:30:11 -- dd/common.sh@31 -- # xtrace_disable 00:07:10.470 15:30:11 -- common/autotest_common.sh@10 -- # set +x 00:07:10.470 [2024-04-17 15:30:11.823838] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:10.470 [2024-04-17 15:30:11.823942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63818 ] 00:07:10.470 { 00:07:10.470 "subsystems": [ 00:07:10.470 { 00:07:10.470 "subsystem": "bdev", 00:07:10.470 "config": [ 00:07:10.470 { 00:07:10.470 "params": { 00:07:10.470 "trtype": "pcie", 00:07:10.470 "traddr": "0000:00:10.0", 00:07:10.470 "name": "Nvme0" 00:07:10.470 }, 00:07:10.470 "method": "bdev_nvme_attach_controller" 00:07:10.470 }, 00:07:10.470 { 00:07:10.470 "params": { 00:07:10.470 "trtype": "pcie", 00:07:10.470 "traddr": "0000:00:11.0", 00:07:10.470 "name": "Nvme1" 00:07:10.470 }, 00:07:10.470 "method": "bdev_nvme_attach_controller" 00:07:10.470 }, 00:07:10.470 { 00:07:10.470 "method": "bdev_wait_for_examine" 00:07:10.470 } 00:07:10.470 ] 00:07:10.470 } 00:07:10.470 ] 00:07:10.470 } 00:07:10.729 [2024-04-17 15:30:11.963296] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.729 [2024-04-17 15:30:12.120253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.254  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.254 00:07:11.545 15:30:12 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:11.545 15:30:12 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:11.545 00:07:11.545 real 0m4.070s 00:07:11.545 user 0m2.986s 00:07:11.545 sys 0m1.260s 00:07:11.545 15:30:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.545 15:30:12 -- common/autotest_common.sh@10 -- # set +x 00:07:11.545 ************************************ 00:07:11.545 END TEST dd_offset_magic 00:07:11.545 ************************************ 00:07:11.545 15:30:12 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:11.545 15:30:12 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:11.545 15:30:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:11.545 15:30:12 -- dd/common.sh@11 -- # local nvme_ref= 00:07:11.545 15:30:12 -- dd/common.sh@12 -- # local size=4194330 00:07:11.545 15:30:12 -- dd/common.sh@14 -- # local bs=1048576 00:07:11.545 15:30:12 -- dd/common.sh@15 -- # local count=5 00:07:11.545 15:30:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:11.545 15:30:12 -- dd/common.sh@18 -- # gen_conf 00:07:11.545 15:30:12 -- dd/common.sh@31 -- # xtrace_disable 00:07:11.545 15:30:12 -- common/autotest_common.sh@10 -- # set +x 00:07:11.545 { 00:07:11.545 "subsystems": [ 00:07:11.545 { 00:07:11.545 "subsystem": "bdev", 00:07:11.545 "config": [ 00:07:11.545 { 00:07:11.545 "params": { 00:07:11.545 "trtype": "pcie", 00:07:11.545 "traddr": "0000:00:10.0", 00:07:11.545 "name": "Nvme0" 00:07:11.545 }, 00:07:11.545 "method": "bdev_nvme_attach_controller" 00:07:11.545 }, 00:07:11.545 { 00:07:11.545 "params": { 00:07:11.545 "trtype": "pcie", 00:07:11.545 "traddr": "0000:00:11.0", 00:07:11.545 "name": "Nvme1" 00:07:11.545 }, 00:07:11.545 "method": "bdev_nvme_attach_controller" 00:07:11.545 }, 00:07:11.545 { 00:07:11.545 "method": "bdev_wait_for_examine" 00:07:11.545 } 00:07:11.545 ] 00:07:11.545 } 00:07:11.545 ] 00:07:11.545 } 00:07:11.545 [2024-04-17 15:30:12.798634] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:11.545 [2024-04-17 15:30:12.798735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:07:11.545 [2024-04-17 15:30:12.931249] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.804 [2024-04-17 15:30:13.076182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.323  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:12.323 00:07:12.323 15:30:13 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:12.323 15:30:13 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:12.323 15:30:13 -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.323 15:30:13 -- dd/common.sh@12 -- # local size=4194330 00:07:12.323 15:30:13 -- dd/common.sh@14 -- # local bs=1048576 00:07:12.323 15:30:13 -- dd/common.sh@15 -- # local count=5 00:07:12.323 15:30:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:12.323 15:30:13 -- dd/common.sh@18 -- # gen_conf 00:07:12.323 15:30:13 -- dd/common.sh@31 -- # xtrace_disable 00:07:12.323 15:30:13 -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 [2024-04-17 15:30:13.710467] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:12.323 [2024-04-17 15:30:13.710810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:07:12.323 { 00:07:12.323 "subsystems": [ 00:07:12.323 { 00:07:12.323 "subsystem": "bdev", 00:07:12.323 "config": [ 00:07:12.323 { 00:07:12.323 "params": { 00:07:12.323 "trtype": "pcie", 00:07:12.323 "traddr": "0000:00:10.0", 00:07:12.323 "name": "Nvme0" 00:07:12.323 }, 00:07:12.323 "method": "bdev_nvme_attach_controller" 00:07:12.323 }, 00:07:12.323 { 00:07:12.323 "params": { 00:07:12.323 "trtype": "pcie", 00:07:12.323 "traddr": "0000:00:11.0", 00:07:12.323 "name": "Nvme1" 00:07:12.323 }, 00:07:12.323 "method": "bdev_nvme_attach_controller" 00:07:12.323 }, 00:07:12.323 { 00:07:12.323 "method": "bdev_wait_for_examine" 00:07:12.323 } 00:07:12.323 ] 00:07:12.323 } 00:07:12.323 ] 00:07:12.323 } 00:07:12.582 [2024-04-17 15:30:13.852882] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.582 [2024-04-17 15:30:13.999725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.407  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:13.407 00:07:13.407 15:30:14 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:13.407 ************************************ 00:07:13.407 END TEST spdk_dd_bdev_to_bdev 00:07:13.407 ************************************ 00:07:13.407 00:07:13.407 real 0m9.423s 00:07:13.407 user 0m6.882s 00:07:13.407 sys 0m4.237s 00:07:13.407 15:30:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.408 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.408 15:30:14 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:13.408 15:30:14 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:13.408 15:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.408 15:30:14 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:07:13.408 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.408 ************************************ 00:07:13.408 START TEST spdk_dd_uring 00:07:13.408 ************************************ 00:07:13.408 15:30:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:13.408 * Looking for test storage... 00:07:13.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:13.408 15:30:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.408 15:30:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.408 15:30:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.408 15:30:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.408 15:30:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.408 15:30:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.408 15:30:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.408 15:30:14 -- paths/export.sh@5 -- # export PATH 00:07:13.408 15:30:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.408 15:30:14 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:13.408 15:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.408 15:30:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.408 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.667 ************************************ 00:07:13.667 START TEST 
dd_uring_copy 00:07:13.667 ************************************ 00:07:13.667 15:30:14 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:07:13.667 15:30:14 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:13.667 15:30:14 -- dd/uring.sh@16 -- # local magic 00:07:13.667 15:30:14 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:13.667 15:30:14 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:13.667 15:30:14 -- dd/uring.sh@19 -- # local verify_magic 00:07:13.667 15:30:14 -- dd/uring.sh@21 -- # init_zram 00:07:13.667 15:30:14 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:13.667 15:30:14 -- dd/common.sh@164 -- # return 00:07:13.667 15:30:14 -- dd/uring.sh@22 -- # create_zram_dev 00:07:13.667 15:30:14 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:13.667 15:30:14 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:13.667 15:30:14 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:13.667 15:30:14 -- dd/common.sh@181 -- # local id=1 00:07:13.667 15:30:14 -- dd/common.sh@182 -- # local size=512M 00:07:13.667 15:30:14 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:13.667 15:30:14 -- dd/common.sh@186 -- # echo 512M 00:07:13.667 15:30:14 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:13.667 15:30:14 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:13.667 15:30:14 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:13.667 15:30:14 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:13.667 15:30:14 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:13.667 15:30:14 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:13.667 15:30:14 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:13.667 15:30:14 -- dd/common.sh@98 -- # xtrace_disable 00:07:13.667 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.668 15:30:14 -- dd/uring.sh@41 -- # magic=wxst05y2e0kcy8yqh6f2g81mqc1tbrucz6xu3h8k9z5g3hjnprlq5f9tio44i1ce7u3zhjwfrnnwb1vrdbxpjsqa0mqe5x6h123v9nagvk0b5zeixu0xjvi5wx0u7x4zamhyodf3odcdpxjxhnzy3zh4sdpmjmsxpox6y0bcapqvpgoxuyrog9c0ef983c1fjvqfghslvzhrz2wrjh1hqi9t6nfcwbrzdgk0sckt1qwm3qyl6kz38asi5y5gcnc3eha33du67n3fn2jr9mp3d7d20qe8j4q7esebcprd6wdftsxp5jdqe4oyfuwjops1hg87w51afhqutywuo631i1j656w6ernux74ywpwq3ihndz8rgg4yjp02uzm892iowsca5lj6wyxwtxeay5gdbue65guqko3e2qc36prse8c493xesobn1lzjmtab8f6vpg8zhy7x3bl6689vhdyeopqkfftmsehjg7leeqa64igd441jghosou9p82ulhmvgj3gwrtwvfm6w24tldgtls3a9m6ykgu152fz8yc68rfqu06lwilf2dj4dknuiqpf3jce1j2t4h3gd5cdr6e6wgdacgxx54nm7udsghfamca8ml3nnq8anpesuxcr7qu7b1tl78z7tbbn7fwmgdhc92ldzonkewx3kpju89b8hibhfaf8ou1v554lkmsm0ut11v8nqkuee6p70fbip8m9p509q8g1gilz3dgiww3xe5yfyejgml7sw7wbqkdv2nmts8cethcsza72jbbc3zbljxc4z88c1pk5s27lr0ycjkjzahnqcd3xs8l4dge05h29p3uofqpb174ab0ho72sg7dwe16t5d5vabl8g9469wspkwy0b3k5m63vr3b8bd3v0g2obakq3841s4u7es3nmjy0k0rv8pm9y6c9cm5npf0cfcf4vbdm70mhjb6edwv16nuhixzrvc1zmudq94gu2vqrw9c1xcx7qjrrul6wnipj8l0h19hdrb8arczr9d7xqz 00:07:13.668 15:30:14 -- dd/uring.sh@42 -- # echo 
wxst05y2e0kcy8yqh6f2g81mqc1tbrucz6xu3h8k9z5g3hjnprlq5f9tio44i1ce7u3zhjwfrnnwb1vrdbxpjsqa0mqe5x6h123v9nagvk0b5zeixu0xjvi5wx0u7x4zamhyodf3odcdpxjxhnzy3zh4sdpmjmsxpox6y0bcapqvpgoxuyrog9c0ef983c1fjvqfghslvzhrz2wrjh1hqi9t6nfcwbrzdgk0sckt1qwm3qyl6kz38asi5y5gcnc3eha33du67n3fn2jr9mp3d7d20qe8j4q7esebcprd6wdftsxp5jdqe4oyfuwjops1hg87w51afhqutywuo631i1j656w6ernux74ywpwq3ihndz8rgg4yjp02uzm892iowsca5lj6wyxwtxeay5gdbue65guqko3e2qc36prse8c493xesobn1lzjmtab8f6vpg8zhy7x3bl6689vhdyeopqkfftmsehjg7leeqa64igd441jghosou9p82ulhmvgj3gwrtwvfm6w24tldgtls3a9m6ykgu152fz8yc68rfqu06lwilf2dj4dknuiqpf3jce1j2t4h3gd5cdr6e6wgdacgxx54nm7udsghfamca8ml3nnq8anpesuxcr7qu7b1tl78z7tbbn7fwmgdhc92ldzonkewx3kpju89b8hibhfaf8ou1v554lkmsm0ut11v8nqkuee6p70fbip8m9p509q8g1gilz3dgiww3xe5yfyejgml7sw7wbqkdv2nmts8cethcsza72jbbc3zbljxc4z88c1pk5s27lr0ycjkjzahnqcd3xs8l4dge05h29p3uofqpb174ab0ho72sg7dwe16t5d5vabl8g9469wspkwy0b3k5m63vr3b8bd3v0g2obakq3841s4u7es3nmjy0k0rv8pm9y6c9cm5npf0cfcf4vbdm70mhjb6edwv16nuhixzrvc1zmudq94gu2vqrw9c1xcx7qjrrul6wnipj8l0h19hdrb8arczr9d7xqz 00:07:13.668 15:30:14 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:13.668 [2024-04-17 15:30:14.984183] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:13.668 [2024-04-17 15:30:14.984467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63958 ] 00:07:13.927 [2024-04-17 15:30:15.123921] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.927 [2024-04-17 15:30:15.239835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.433  Copying: 511/511 [MB] (average 869 MBps) 00:07:15.433 00:07:15.433 15:30:16 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:15.433 15:30:16 -- dd/uring.sh@54 -- # gen_conf 00:07:15.433 15:30:16 -- dd/common.sh@31 -- # xtrace_disable 00:07:15.433 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:07:15.433 [2024-04-17 15:30:16.831940] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:15.433 [2024-04-17 15:30:16.832038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:07:15.433 { 00:07:15.433 "subsystems": [ 00:07:15.433 { 00:07:15.433 "subsystem": "bdev", 00:07:15.433 "config": [ 00:07:15.433 { 00:07:15.433 "params": { 00:07:15.433 "block_size": 512, 00:07:15.433 "num_blocks": 1048576, 00:07:15.433 "name": "malloc0" 00:07:15.433 }, 00:07:15.433 "method": "bdev_malloc_create" 00:07:15.433 }, 00:07:15.433 { 00:07:15.433 "params": { 00:07:15.433 "filename": "/dev/zram1", 00:07:15.433 "name": "uring0" 00:07:15.433 }, 00:07:15.433 "method": "bdev_uring_create" 00:07:15.433 }, 00:07:15.433 { 00:07:15.433 "method": "bdev_wait_for_examine" 00:07:15.433 } 00:07:15.433 ] 00:07:15.433 } 00:07:15.433 ] 00:07:15.433 } 00:07:15.693 [2024-04-17 15:30:16.967209] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.693 [2024-04-17 15:30:17.108735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.835  Copying: 239/512 [MB] (239 MBps) Copying: 466/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:07:18.835 00:07:18.835 15:30:20 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:18.835 15:30:20 -- dd/uring.sh@60 -- # gen_conf 00:07:18.835 15:30:20 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.835 15:30:20 -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 [2024-04-17 15:30:20.313676] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:19.094 [2024-04-17 15:30:20.313826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64035 ] 00:07:19.094 { 00:07:19.094 "subsystems": [ 00:07:19.094 { 00:07:19.094 "subsystem": "bdev", 00:07:19.094 "config": [ 00:07:19.094 { 00:07:19.094 "params": { 00:07:19.094 "block_size": 512, 00:07:19.094 "num_blocks": 1048576, 00:07:19.094 "name": "malloc0" 00:07:19.094 }, 00:07:19.094 "method": "bdev_malloc_create" 00:07:19.094 }, 00:07:19.094 { 00:07:19.094 "params": { 00:07:19.094 "filename": "/dev/zram1", 00:07:19.094 "name": "uring0" 00:07:19.094 }, 00:07:19.094 "method": "bdev_uring_create" 00:07:19.094 }, 00:07:19.094 { 00:07:19.094 "method": "bdev_wait_for_examine" 00:07:19.094 } 00:07:19.094 ] 00:07:19.094 } 00:07:19.094 ] 00:07:19.094 } 00:07:19.094 [2024-04-17 15:30:20.455980] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.353 [2024-04-17 15:30:20.590412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.167  Copying: 187/512 [MB] (187 MBps) Copying: 361/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 189 MBps) 00:07:23.167 00:07:23.167 15:30:24 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:23.168 15:30:24 -- dd/uring.sh@66 -- # [[ 
wxst05y2e0kcy8yqh6f2g81mqc1tbrucz6xu3h8k9z5g3hjnprlq5f9tio44i1ce7u3zhjwfrnnwb1vrdbxpjsqa0mqe5x6h123v9nagvk0b5zeixu0xjvi5wx0u7x4zamhyodf3odcdpxjxhnzy3zh4sdpmjmsxpox6y0bcapqvpgoxuyrog9c0ef983c1fjvqfghslvzhrz2wrjh1hqi9t6nfcwbrzdgk0sckt1qwm3qyl6kz38asi5y5gcnc3eha33du67n3fn2jr9mp3d7d20qe8j4q7esebcprd6wdftsxp5jdqe4oyfuwjops1hg87w51afhqutywuo631i1j656w6ernux74ywpwq3ihndz8rgg4yjp02uzm892iowsca5lj6wyxwtxeay5gdbue65guqko3e2qc36prse8c493xesobn1lzjmtab8f6vpg8zhy7x3bl6689vhdyeopqkfftmsehjg7leeqa64igd441jghosou9p82ulhmvgj3gwrtwvfm6w24tldgtls3a9m6ykgu152fz8yc68rfqu06lwilf2dj4dknuiqpf3jce1j2t4h3gd5cdr6e6wgdacgxx54nm7udsghfamca8ml3nnq8anpesuxcr7qu7b1tl78z7tbbn7fwmgdhc92ldzonkewx3kpju89b8hibhfaf8ou1v554lkmsm0ut11v8nqkuee6p70fbip8m9p509q8g1gilz3dgiww3xe5yfyejgml7sw7wbqkdv2nmts8cethcsza72jbbc3zbljxc4z88c1pk5s27lr0ycjkjzahnqcd3xs8l4dge05h29p3uofqpb174ab0ho72sg7dwe16t5d5vabl8g9469wspkwy0b3k5m63vr3b8bd3v0g2obakq3841s4u7es3nmjy0k0rv8pm9y6c9cm5npf0cfcf4vbdm70mhjb6edwv16nuhixzrvc1zmudq94gu2vqrw9c1xcx7qjrrul6wnipj8l0h19hdrb8arczr9d7xqz == \w\x\s\t\0\5\y\2\e\0\k\c\y\8\y\q\h\6\f\2\g\8\1\m\q\c\1\t\b\r\u\c\z\6\x\u\3\h\8\k\9\z\5\g\3\h\j\n\p\r\l\q\5\f\9\t\i\o\4\4\i\1\c\e\7\u\3\z\h\j\w\f\r\n\n\w\b\1\v\r\d\b\x\p\j\s\q\a\0\m\q\e\5\x\6\h\1\2\3\v\9\n\a\g\v\k\0\b\5\z\e\i\x\u\0\x\j\v\i\5\w\x\0\u\7\x\4\z\a\m\h\y\o\d\f\3\o\d\c\d\p\x\j\x\h\n\z\y\3\z\h\4\s\d\p\m\j\m\s\x\p\o\x\6\y\0\b\c\a\p\q\v\p\g\o\x\u\y\r\o\g\9\c\0\e\f\9\8\3\c\1\f\j\v\q\f\g\h\s\l\v\z\h\r\z\2\w\r\j\h\1\h\q\i\9\t\6\n\f\c\w\b\r\z\d\g\k\0\s\c\k\t\1\q\w\m\3\q\y\l\6\k\z\3\8\a\s\i\5\y\5\g\c\n\c\3\e\h\a\3\3\d\u\6\7\n\3\f\n\2\j\r\9\m\p\3\d\7\d\2\0\q\e\8\j\4\q\7\e\s\e\b\c\p\r\d\6\w\d\f\t\s\x\p\5\j\d\q\e\4\o\y\f\u\w\j\o\p\s\1\h\g\8\7\w\5\1\a\f\h\q\u\t\y\w\u\o\6\3\1\i\1\j\6\5\6\w\6\e\r\n\u\x\7\4\y\w\p\w\q\3\i\h\n\d\z\8\r\g\g\4\y\j\p\0\2\u\z\m\8\9\2\i\o\w\s\c\a\5\l\j\6\w\y\x\w\t\x\e\a\y\5\g\d\b\u\e\6\5\g\u\q\k\o\3\e\2\q\c\3\6\p\r\s\e\8\c\4\9\3\x\e\s\o\b\n\1\l\z\j\m\t\a\b\8\f\6\v\p\g\8\z\h\y\7\x\3\b\l\6\6\8\9\v\h\d\y\e\o\p\q\k\f\f\t\m\s\e\h\j\g\7\l\e\e\q\a\6\4\i\g\d\4\4\1\j\g\h\o\s\o\u\9\p\8\2\u\l\h\m\v\g\j\3\g\w\r\t\w\v\f\m\6\w\2\4\t\l\d\g\t\l\s\3\a\9\m\6\y\k\g\u\1\5\2\f\z\8\y\c\6\8\r\f\q\u\0\6\l\w\i\l\f\2\d\j\4\d\k\n\u\i\q\p\f\3\j\c\e\1\j\2\t\4\h\3\g\d\5\c\d\r\6\e\6\w\g\d\a\c\g\x\x\5\4\n\m\7\u\d\s\g\h\f\a\m\c\a\8\m\l\3\n\n\q\8\a\n\p\e\s\u\x\c\r\7\q\u\7\b\1\t\l\7\8\z\7\t\b\b\n\7\f\w\m\g\d\h\c\9\2\l\d\z\o\n\k\e\w\x\3\k\p\j\u\8\9\b\8\h\i\b\h\f\a\f\8\o\u\1\v\5\5\4\l\k\m\s\m\0\u\t\1\1\v\8\n\q\k\u\e\e\6\p\7\0\f\b\i\p\8\m\9\p\5\0\9\q\8\g\1\g\i\l\z\3\d\g\i\w\w\3\x\e\5\y\f\y\e\j\g\m\l\7\s\w\7\w\b\q\k\d\v\2\n\m\t\s\8\c\e\t\h\c\s\z\a\7\2\j\b\b\c\3\z\b\l\j\x\c\4\z\8\8\c\1\p\k\5\s\2\7\l\r\0\y\c\j\k\j\z\a\h\n\q\c\d\3\x\s\8\l\4\d\g\e\0\5\h\2\9\p\3\u\o\f\q\p\b\1\7\4\a\b\0\h\o\7\2\s\g\7\d\w\e\1\6\t\5\d\5\v\a\b\l\8\g\9\4\6\9\w\s\p\k\w\y\0\b\3\k\5\m\6\3\v\r\3\b\8\b\d\3\v\0\g\2\o\b\a\k\q\3\8\4\1\s\4\u\7\e\s\3\n\m\j\y\0\k\0\r\v\8\p\m\9\y\6\c\9\c\m\5\n\p\f\0\c\f\c\f\4\v\b\d\m\7\0\m\h\j\b\6\e\d\w\v\1\6\n\u\h\i\x\z\r\v\c\1\z\m\u\d\q\9\4\g\u\2\v\q\r\w\9\c\1\x\c\x\7\q\j\r\r\u\l\6\w\n\i\p\j\8\l\0\h\1\9\h\d\r\b\8\a\r\c\z\r\9\d\7\x\q\z ]] 00:07:23.168 15:30:24 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:23.168 15:30:24 -- dd/uring.sh@69 -- # [[ 
wxst05y2e0kcy8yqh6f2g81mqc1tbrucz6xu3h8k9z5g3hjnprlq5f9tio44i1ce7u3zhjwfrnnwb1vrdbxpjsqa0mqe5x6h123v9nagvk0b5zeixu0xjvi5wx0u7x4zamhyodf3odcdpxjxhnzy3zh4sdpmjmsxpox6y0bcapqvpgoxuyrog9c0ef983c1fjvqfghslvzhrz2wrjh1hqi9t6nfcwbrzdgk0sckt1qwm3qyl6kz38asi5y5gcnc3eha33du67n3fn2jr9mp3d7d20qe8j4q7esebcprd6wdftsxp5jdqe4oyfuwjops1hg87w51afhqutywuo631i1j656w6ernux74ywpwq3ihndz8rgg4yjp02uzm892iowsca5lj6wyxwtxeay5gdbue65guqko3e2qc36prse8c493xesobn1lzjmtab8f6vpg8zhy7x3bl6689vhdyeopqkfftmsehjg7leeqa64igd441jghosou9p82ulhmvgj3gwrtwvfm6w24tldgtls3a9m6ykgu152fz8yc68rfqu06lwilf2dj4dknuiqpf3jce1j2t4h3gd5cdr6e6wgdacgxx54nm7udsghfamca8ml3nnq8anpesuxcr7qu7b1tl78z7tbbn7fwmgdhc92ldzonkewx3kpju89b8hibhfaf8ou1v554lkmsm0ut11v8nqkuee6p70fbip8m9p509q8g1gilz3dgiww3xe5yfyejgml7sw7wbqkdv2nmts8cethcsza72jbbc3zbljxc4z88c1pk5s27lr0ycjkjzahnqcd3xs8l4dge05h29p3uofqpb174ab0ho72sg7dwe16t5d5vabl8g9469wspkwy0b3k5m63vr3b8bd3v0g2obakq3841s4u7es3nmjy0k0rv8pm9y6c9cm5npf0cfcf4vbdm70mhjb6edwv16nuhixzrvc1zmudq94gu2vqrw9c1xcx7qjrrul6wnipj8l0h19hdrb8arczr9d7xqz == \w\x\s\t\0\5\y\2\e\0\k\c\y\8\y\q\h\6\f\2\g\8\1\m\q\c\1\t\b\r\u\c\z\6\x\u\3\h\8\k\9\z\5\g\3\h\j\n\p\r\l\q\5\f\9\t\i\o\4\4\i\1\c\e\7\u\3\z\h\j\w\f\r\n\n\w\b\1\v\r\d\b\x\p\j\s\q\a\0\m\q\e\5\x\6\h\1\2\3\v\9\n\a\g\v\k\0\b\5\z\e\i\x\u\0\x\j\v\i\5\w\x\0\u\7\x\4\z\a\m\h\y\o\d\f\3\o\d\c\d\p\x\j\x\h\n\z\y\3\z\h\4\s\d\p\m\j\m\s\x\p\o\x\6\y\0\b\c\a\p\q\v\p\g\o\x\u\y\r\o\g\9\c\0\e\f\9\8\3\c\1\f\j\v\q\f\g\h\s\l\v\z\h\r\z\2\w\r\j\h\1\h\q\i\9\t\6\n\f\c\w\b\r\z\d\g\k\0\s\c\k\t\1\q\w\m\3\q\y\l\6\k\z\3\8\a\s\i\5\y\5\g\c\n\c\3\e\h\a\3\3\d\u\6\7\n\3\f\n\2\j\r\9\m\p\3\d\7\d\2\0\q\e\8\j\4\q\7\e\s\e\b\c\p\r\d\6\w\d\f\t\s\x\p\5\j\d\q\e\4\o\y\f\u\w\j\o\p\s\1\h\g\8\7\w\5\1\a\f\h\q\u\t\y\w\u\o\6\3\1\i\1\j\6\5\6\w\6\e\r\n\u\x\7\4\y\w\p\w\q\3\i\h\n\d\z\8\r\g\g\4\y\j\p\0\2\u\z\m\8\9\2\i\o\w\s\c\a\5\l\j\6\w\y\x\w\t\x\e\a\y\5\g\d\b\u\e\6\5\g\u\q\k\o\3\e\2\q\c\3\6\p\r\s\e\8\c\4\9\3\x\e\s\o\b\n\1\l\z\j\m\t\a\b\8\f\6\v\p\g\8\z\h\y\7\x\3\b\l\6\6\8\9\v\h\d\y\e\o\p\q\k\f\f\t\m\s\e\h\j\g\7\l\e\e\q\a\6\4\i\g\d\4\4\1\j\g\h\o\s\o\u\9\p\8\2\u\l\h\m\v\g\j\3\g\w\r\t\w\v\f\m\6\w\2\4\t\l\d\g\t\l\s\3\a\9\m\6\y\k\g\u\1\5\2\f\z\8\y\c\6\8\r\f\q\u\0\6\l\w\i\l\f\2\d\j\4\d\k\n\u\i\q\p\f\3\j\c\e\1\j\2\t\4\h\3\g\d\5\c\d\r\6\e\6\w\g\d\a\c\g\x\x\5\4\n\m\7\u\d\s\g\h\f\a\m\c\a\8\m\l\3\n\n\q\8\a\n\p\e\s\u\x\c\r\7\q\u\7\b\1\t\l\7\8\z\7\t\b\b\n\7\f\w\m\g\d\h\c\9\2\l\d\z\o\n\k\e\w\x\3\k\p\j\u\8\9\b\8\h\i\b\h\f\a\f\8\o\u\1\v\5\5\4\l\k\m\s\m\0\u\t\1\1\v\8\n\q\k\u\e\e\6\p\7\0\f\b\i\p\8\m\9\p\5\0\9\q\8\g\1\g\i\l\z\3\d\g\i\w\w\3\x\e\5\y\f\y\e\j\g\m\l\7\s\w\7\w\b\q\k\d\v\2\n\m\t\s\8\c\e\t\h\c\s\z\a\7\2\j\b\b\c\3\z\b\l\j\x\c\4\z\8\8\c\1\p\k\5\s\2\7\l\r\0\y\c\j\k\j\z\a\h\n\q\c\d\3\x\s\8\l\4\d\g\e\0\5\h\2\9\p\3\u\o\f\q\p\b\1\7\4\a\b\0\h\o\7\2\s\g\7\d\w\e\1\6\t\5\d\5\v\a\b\l\8\g\9\4\6\9\w\s\p\k\w\y\0\b\3\k\5\m\6\3\v\r\3\b\8\b\d\3\v\0\g\2\o\b\a\k\q\3\8\4\1\s\4\u\7\e\s\3\n\m\j\y\0\k\0\r\v\8\p\m\9\y\6\c\9\c\m\5\n\p\f\0\c\f\c\f\4\v\b\d\m\7\0\m\h\j\b\6\e\d\w\v\1\6\n\u\h\i\x\z\r\v\c\1\z\m\u\d\q\9\4\g\u\2\v\q\r\w\9\c\1\x\c\x\7\q\j\r\r\u\l\6\w\n\i\p\j\8\l\0\h\1\9\h\d\r\b\8\a\r\c\z\r\9\d\7\x\q\z ]] 00:07:23.168 15:30:24 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:23.427 15:30:24 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:23.427 15:30:24 -- dd/uring.sh@75 -- # gen_conf 00:07:23.427 15:30:24 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.427 15:30:24 -- common/autotest_common.sh@10 -- # set +x 
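The uring copy path above is driven entirely through spdk_dd's --json interface: every run receives the same generated bdev config (a malloc0 of 1048576 512-byte blocks plus a uring0 bdev over /dev/zram1, followed by bdev_wait_for_examine), and correctness is proven by reading the magic pattern back out of uring0 and diffing it against the reference dump. A condensed sketch of that round trip, assuming the config printed above has been saved to a file (bdev.json is an illustrative name, not something the test creates; in the trace the config arrives via --json /dev/fd/62 from gen_conf):

    # illustrative reconstruction, not the literal dd/uring.sh test code
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF=bdev.json                                      # hypothetical file holding the malloc0/uring0 config shown above
    "$DD" --ib=uring0 --of=magic.dump1 --json "$CONF"   # read the uring bdev back into a file
    diff -q magic.dump0 magic.dump1                     # must match the reference dump written earlier in the test
    "$DD" --ib=uring0 --ob=malloc0 --json "$CONF"       # the uring0-to-malloc0 leg whose output follows below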
00:07:23.427 [2024-04-17 15:30:24.656882] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:23.427 [2024-04-17 15:30:24.656989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64101 ] 00:07:23.427 { 00:07:23.427 "subsystems": [ 00:07:23.427 { 00:07:23.427 "subsystem": "bdev", 00:07:23.427 "config": [ 00:07:23.427 { 00:07:23.427 "params": { 00:07:23.427 "block_size": 512, 00:07:23.427 "num_blocks": 1048576, 00:07:23.427 "name": "malloc0" 00:07:23.427 }, 00:07:23.427 "method": "bdev_malloc_create" 00:07:23.427 }, 00:07:23.427 { 00:07:23.427 "params": { 00:07:23.427 "filename": "/dev/zram1", 00:07:23.427 "name": "uring0" 00:07:23.427 }, 00:07:23.427 "method": "bdev_uring_create" 00:07:23.427 }, 00:07:23.427 { 00:07:23.427 "method": "bdev_wait_for_examine" 00:07:23.427 } 00:07:23.427 ] 00:07:23.427 } 00:07:23.427 ] 00:07:23.427 } 00:07:23.427 [2024-04-17 15:30:24.791440] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.685 [2024-04-17 15:30:24.932978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.578  Copying: 161/512 [MB] (161 MBps) Copying: 331/512 [MB] (170 MBps) Copying: 502/512 [MB] (171 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:07:27.578 00:07:27.578 15:30:28 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:27.578 15:30:28 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:27.578 15:30:28 -- dd/uring.sh@87 -- # : 00:07:27.578 15:30:28 -- dd/uring.sh@87 -- # : 00:07:27.578 15:30:28 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:27.578 15:30:28 -- dd/uring.sh@87 -- # gen_conf 00:07:27.578 15:30:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.578 15:30:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.578 [2024-04-17 15:30:28.994592] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:27.578 [2024-04-17 15:30:28.994695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64163 ] 00:07:27.578 { 00:07:27.578 "subsystems": [ 00:07:27.578 { 00:07:27.578 "subsystem": "bdev", 00:07:27.578 "config": [ 00:07:27.578 { 00:07:27.578 "params": { 00:07:27.578 "block_size": 512, 00:07:27.578 "num_blocks": 1048576, 00:07:27.578 "name": "malloc0" 00:07:27.578 }, 00:07:27.578 "method": "bdev_malloc_create" 00:07:27.578 }, 00:07:27.578 { 00:07:27.578 "params": { 00:07:27.578 "filename": "/dev/zram1", 00:07:27.578 "name": "uring0" 00:07:27.578 }, 00:07:27.578 "method": "bdev_uring_create" 00:07:27.578 }, 00:07:27.578 { 00:07:27.578 "params": { 00:07:27.578 "name": "uring0" 00:07:27.578 }, 00:07:27.578 "method": "bdev_uring_delete" 00:07:27.578 }, 00:07:27.578 { 00:07:27.578 "method": "bdev_wait_for_examine" 00:07:27.578 } 00:07:27.578 ] 00:07:27.578 } 00:07:27.578 ] 00:07:27.578 } 00:07:27.837 [2024-04-17 15:30:29.198851] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.096 [2024-04-17 15:30:29.337524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.923  Copying: 0/0 [B] (average 0 Bps) 00:07:28.923 00:07:28.923 15:30:30 -- dd/uring.sh@94 -- # : 00:07:28.923 15:30:30 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.923 15:30:30 -- common/autotest_common.sh@638 -- # local es=0 00:07:28.923 15:30:30 -- dd/uring.sh@94 -- # gen_conf 00:07:28.923 15:30:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.923 15:30:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.923 15:30:30 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.923 15:30:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 15:30:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:28.923 15:30:30 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.923 15:30:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:28.923 15:30:30 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.923 15:30:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:28.923 15:30:30 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.923 15:30:30 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.923 15:30:30 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.923 [2024-04-17 15:30:30.327428] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
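The two runs around this point form the delete scenario: pid64163's config appends a bdev_uring_delete step for uring0, so once bdev_wait_for_examine returns there is no target left and the copy legitimately reports "Copying: 0/0 [B]". The follow-up NOT-wrapped spdk_dd --ib=uring0 read (pid64202, whose output continues below) is therefore expected to fail; the "No such device" errors and the es=237 it leaves behind are consistent with -ENODEV surfacing as an 8-bit exit status (ENODEV is 19 on Linux, and 256 - 19 = 237), although that errno mapping is an inference from the numbers in the trace rather than something the suite asserts.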
00:07:28.923 [2024-04-17 15:30:30.328433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64202 ] 00:07:28.923 { 00:07:28.923 "subsystems": [ 00:07:28.923 { 00:07:28.923 "subsystem": "bdev", 00:07:28.923 "config": [ 00:07:28.923 { 00:07:28.923 "params": { 00:07:28.923 "block_size": 512, 00:07:28.923 "num_blocks": 1048576, 00:07:28.923 "name": "malloc0" 00:07:28.923 }, 00:07:28.923 "method": "bdev_malloc_create" 00:07:28.923 }, 00:07:28.923 { 00:07:28.923 "params": { 00:07:28.923 "filename": "/dev/zram1", 00:07:28.923 "name": "uring0" 00:07:28.923 }, 00:07:28.923 "method": "bdev_uring_create" 00:07:28.923 }, 00:07:28.923 { 00:07:28.923 "params": { 00:07:28.923 "name": "uring0" 00:07:28.923 }, 00:07:28.923 "method": "bdev_uring_delete" 00:07:28.923 }, 00:07:28.923 { 00:07:28.923 "method": "bdev_wait_for_examine" 00:07:28.923 } 00:07:28.923 ] 00:07:28.923 } 00:07:28.923 ] 00:07:28.923 } 00:07:29.182 [2024-04-17 15:30:30.469487] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.182 [2024-04-17 15:30:30.599990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.750 [2024-04-17 15:30:30.930017] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:29.750 [2024-04-17 15:30:30.930069] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:29.750 [2024-04-17 15:30:30.930081] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:29.750 [2024-04-17 15:30:30.930092] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.008 [2024-04-17 15:30:31.382435] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:07:30.267 15:30:31 -- common/autotest_common.sh@641 -- # es=237 00:07:30.267 15:30:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:30.267 15:30:31 -- common/autotest_common.sh@650 -- # es=109 00:07:30.267 15:30:31 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:30.267 15:30:31 -- common/autotest_common.sh@658 -- # es=1 00:07:30.267 15:30:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:30.267 15:30:31 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:30.267 15:30:31 -- dd/common.sh@172 -- # local id=1 00:07:30.267 15:30:31 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:30.267 15:30:31 -- dd/common.sh@176 -- # echo 1 00:07:30.267 15:30:31 -- dd/common.sh@177 -- # echo 1 00:07:30.267 15:30:31 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:30.526 ************************************ 00:07:30.526 END TEST dd_uring_copy 00:07:30.526 ************************************ 00:07:30.526 00:07:30.526 real 0m16.918s 00:07:30.526 user 0m11.349s 00:07:30.526 sys 0m13.329s 00:07:30.526 15:30:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.526 15:30:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.526 ************************************ 00:07:30.526 END TEST spdk_dd_uring 00:07:30.526 ************************************ 00:07:30.526 00:07:30.526 real 0m17.133s 00:07:30.526 user 0m11.436s 00:07:30.526 sys 0m13.438s 00:07:30.526 15:30:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.526 15:30:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.526 15:30:31 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.526 15:30:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.526 15:30:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.526 15:30:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.785 ************************************ 00:07:30.785 START TEST spdk_dd_sparse 00:07:30.785 ************************************ 00:07:30.785 15:30:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.785 * Looking for test storage... 00:07:30.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.785 15:30:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.786 15:30:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.786 15:30:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.786 15:30:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.786 15:30:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.786 15:30:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.786 15:30:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.786 15:30:32 -- paths/export.sh@5 -- # export PATH 00:07:30.786 15:30:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.786 15:30:32 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:30.786 15:30:32 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:30.786 15:30:32 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:07:30.786 15:30:32 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:30.786 15:30:32 -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:30.786 15:30:32 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:30.786 15:30:32 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:30.786 15:30:32 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:30.786 15:30:32 -- dd/sparse.sh@118 -- # prepare 00:07:30.786 15:30:32 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:30.786 15:30:32 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:30.786 1+0 records in 00:07:30.786 1+0 records out 00:07:30.786 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0066582 s, 630 MB/s 00:07:30.786 15:30:32 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:30.786 1+0 records in 00:07:30.786 1+0 records out 00:07:30.786 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0085684 s, 490 MB/s 00:07:30.786 15:30:32 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:30.786 1+0 records in 00:07:30.786 1+0 records out 00:07:30.786 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00822435 s, 510 MB/s 00:07:30.786 15:30:32 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:30.786 15:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.786 15:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.786 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.786 ************************************ 00:07:30.786 START TEST dd_sparse_file_to_file 00:07:30.786 ************************************ 00:07:30.786 15:30:32 -- common/autotest_common.sh@1111 -- # file_to_file 00:07:30.786 15:30:32 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:30.786 15:30:32 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:30.786 15:30:32 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:30.786 15:30:32 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:30.786 15:30:32 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:30.786 15:30:32 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:30.786 15:30:32 -- dd/sparse.sh@41 -- # gen_conf 00:07:30.786 15:30:32 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:30.786 15:30:32 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.786 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:31.044 [2024-04-17 15:30:32.248058] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
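The prepare step above deliberately builds a holey source: dd_sparse_aio_disk, the backing file for the dd_aio bdev, is truncated to 104857600 bytes, and file_zero1 is produced by three 4 MiB zero writes with bs=4M at seek=0, seek=4 and seek=8 (offsets 0, 16 MiB and 32 MiB), leaving two 12 MiB holes in between. That is why the stat checks in the next run report an apparent size of 37748736 bytes (36 MiB, the end of the seek=8 write) but only 24576 allocated 512-byte blocks (24576 x 512 = 12582912 bytes, exactly the three written chunks), and the case only passes if file_zero2 shows the same pair of numbers after the --sparse copy. A condensed sketch, reusing the names from the trace:

    # condensed sketch of the sparse-source prep and the later hole check (names from the trace)
    truncate dd_sparse_aio_disk --size 104857600         # backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # 4 MiB at offset 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # 4 MiB at offset 32 MiB
    stat --printf=%s file_zero1                          # apparent size: 37748736 in this run
    stat --printf=%b file_zero1                          # allocated 512-byte blocks: 24576 here (12 MiB of real data)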
00:07:31.044 [2024-04-17 15:30:32.248377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64302 ] 00:07:31.044 { 00:07:31.044 "subsystems": [ 00:07:31.044 { 00:07:31.044 "subsystem": "bdev", 00:07:31.044 "config": [ 00:07:31.044 { 00:07:31.044 "params": { 00:07:31.044 "block_size": 4096, 00:07:31.044 "filename": "dd_sparse_aio_disk", 00:07:31.044 "name": "dd_aio" 00:07:31.044 }, 00:07:31.044 "method": "bdev_aio_create" 00:07:31.044 }, 00:07:31.044 { 00:07:31.044 "params": { 00:07:31.044 "lvs_name": "dd_lvstore", 00:07:31.044 "bdev_name": "dd_aio" 00:07:31.044 }, 00:07:31.044 "method": "bdev_lvol_create_lvstore" 00:07:31.044 }, 00:07:31.044 { 00:07:31.044 "method": "bdev_wait_for_examine" 00:07:31.044 } 00:07:31.044 ] 00:07:31.044 } 00:07:31.044 ] 00:07:31.044 } 00:07:31.044 [2024-04-17 15:30:32.388487] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.303 [2024-04-17 15:30:32.495479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.870  Copying: 12/36 [MB] (average 750 MBps) 00:07:31.870 00:07:31.870 15:30:33 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:31.870 15:30:33 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:31.870 15:30:33 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:31.870 15:30:33 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:31.870 15:30:33 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:31.870 15:30:33 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:31.870 15:30:33 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:31.870 15:30:33 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:31.870 ************************************ 00:07:31.870 END TEST dd_sparse_file_to_file 00:07:31.870 ************************************ 00:07:31.870 15:30:33 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:31.870 15:30:33 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:31.870 00:07:31.870 real 0m0.879s 00:07:31.870 user 0m0.563s 00:07:31.870 sys 0m0.471s 00:07:31.870 15:30:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.870 15:30:33 -- common/autotest_common.sh@10 -- # set +x 00:07:31.870 15:30:33 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:31.870 15:30:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.870 15:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.870 15:30:33 -- common/autotest_common.sh@10 -- # set +x 00:07:31.870 ************************************ 00:07:31.870 START TEST dd_sparse_file_to_bdev 00:07:31.870 ************************************ 00:07:31.870 15:30:33 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:07:31.870 15:30:33 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:31.870 15:30:33 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:31.870 15:30:33 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:31.870 15:30:33 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:31.870 15:30:33 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:31.870 15:30:33 -- dd/sparse.sh@73 -- # gen_conf 
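dd_sparse_file_to_bdev, whose run starts here, writes file_zero2 into a logical volume instead of a plain file: the generated config stacks bdev_aio_create (dd_aio over dd_sparse_aio_disk) with bdev_lvol_create_lvstore and a thin-provisioned bdev_lvol_create, and the lvol is then addressed on the spdk_dd command line as lvstore/lvol_name. The --bs=12582912 on the command above also matches the 12 MiB of real data measured a moment ago. A minimal sketch of the invocation, assuming the generated config has been written to a file (conf.json is an illustrative name; the trace feeds it in via --json /dev/fd/62):

    # minimal sketch of addressing the thin-provisioned lvol as lvstore/lvol_name
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json conf.json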
00:07:31.870 15:30:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:31.870 15:30:33 -- common/autotest_common.sh@10 -- # set +x 00:07:31.870 [2024-04-17 15:30:33.242264] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:31.870 [2024-04-17 15:30:33.242354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64354 ] 00:07:31.870 { 00:07:31.870 "subsystems": [ 00:07:31.870 { 00:07:31.870 "subsystem": "bdev", 00:07:31.870 "config": [ 00:07:31.870 { 00:07:31.870 "params": { 00:07:31.870 "block_size": 4096, 00:07:31.870 "filename": "dd_sparse_aio_disk", 00:07:31.870 "name": "dd_aio" 00:07:31.870 }, 00:07:31.870 "method": "bdev_aio_create" 00:07:31.870 }, 00:07:31.870 { 00:07:31.870 "params": { 00:07:31.870 "lvs_name": "dd_lvstore", 00:07:31.870 "lvol_name": "dd_lvol", 00:07:31.870 "size": 37748736, 00:07:31.870 "thin_provision": true 00:07:31.870 }, 00:07:31.870 "method": "bdev_lvol_create" 00:07:31.870 }, 00:07:31.870 { 00:07:31.870 "method": "bdev_wait_for_examine" 00:07:31.870 } 00:07:31.870 ] 00:07:31.870 } 00:07:31.870 ] 00:07:31.870 } 00:07:32.129 [2024-04-17 15:30:33.374317] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.129 [2024-04-17 15:30:33.511145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.386 [2024-04-17 15:30:33.639733] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:32.387  Copying: 12/36 [MB] (average 413 MBps)[2024-04-17 15:30:33.690712] app.c: 930:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:32.645 00:07:32.645 00:07:32.645 00:07:32.645 real 0m0.848s 00:07:32.645 user 0m0.564s 00:07:32.645 sys 0m0.448s 00:07:32.645 15:30:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.645 ************************************ 00:07:32.645 END TEST dd_sparse_file_to_bdev 00:07:32.645 ************************************ 00:07:32.645 15:30:34 -- common/autotest_common.sh@10 -- # set +x 00:07:32.645 15:30:34 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:32.645 15:30:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.645 15:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.645 15:30:34 -- common/autotest_common.sh@10 -- # set +x 00:07:32.903 ************************************ 00:07:32.903 START TEST dd_sparse_bdev_to_file 00:07:32.903 ************************************ 00:07:32.903 15:30:34 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:07:32.903 15:30:34 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:32.903 15:30:34 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:32.903 15:30:34 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:32.903 15:30:34 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:32.903 15:30:34 -- dd/sparse.sh@91 -- # gen_conf 00:07:32.903 15:30:34 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:32.903 15:30:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.903 15:30:34 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.903 [2024-04-17 15:30:34.212046] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:32.903 [2024-04-17 15:30:34.212130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64396 ] 00:07:32.903 { 00:07:32.903 "subsystems": [ 00:07:32.903 { 00:07:32.903 "subsystem": "bdev", 00:07:32.903 "config": [ 00:07:32.903 { 00:07:32.903 "params": { 00:07:32.903 "block_size": 4096, 00:07:32.903 "filename": "dd_sparse_aio_disk", 00:07:32.903 "name": "dd_aio" 00:07:32.903 }, 00:07:32.903 "method": "bdev_aio_create" 00:07:32.903 }, 00:07:32.903 { 00:07:32.903 "method": "bdev_wait_for_examine" 00:07:32.903 } 00:07:32.903 ] 00:07:32.903 } 00:07:32.903 ] 00:07:32.903 } 00:07:33.161 [2024-04-17 15:30:34.350485] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.161 [2024-04-17 15:30:34.477501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.677  Copying: 12/36 [MB] (average 857 MBps) 00:07:33.677 00:07:33.677 15:30:34 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:33.677 15:30:34 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:33.677 15:30:34 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:33.677 15:30:35 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:33.677 15:30:35 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:33.677 15:30:35 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:33.677 15:30:35 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:33.677 15:30:35 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:33.677 ************************************ 00:07:33.677 END TEST dd_sparse_bdev_to_file 00:07:33.677 ************************************ 00:07:33.677 15:30:35 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:33.677 15:30:35 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:33.677 00:07:33.677 real 0m0.856s 00:07:33.677 user 0m0.561s 00:07:33.677 sys 0m0.442s 00:07:33.677 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.677 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.677 15:30:35 -- dd/sparse.sh@1 -- # cleanup 00:07:33.677 15:30:35 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:33.677 15:30:35 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:33.677 15:30:35 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:33.677 15:30:35 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:33.677 ************************************ 00:07:33.677 END TEST spdk_dd_sparse 00:07:33.677 ************************************ 00:07:33.677 00:07:33.677 real 0m3.102s 00:07:33.677 user 0m1.860s 00:07:33.677 sys 0m1.656s 00:07:33.677 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.677 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.677 15:30:35 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:33.677 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.677 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.677 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.934 ************************************ 00:07:33.934 START TEST spdk_dd_negative 00:07:33.934 ************************************ 00:07:33.934 15:30:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
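Everything from here on is spdk_dd_negative: each case hands spdk_dd a deliberately bad invocation through the suite's NOT wrapper and only counts as passing when the command exits non-zero with the matching *ERROR* line (the es=22 captured after most of them lines up with EINVAL). A compressed sketch of the pattern; must_fail is a hypothetical stand-in for the real NOT helper and the dump paths are shortened, but the flag combinations are ones exercised below:

    # must_fail is a hypothetical stand-in for the suite's NOT helper: succeed only if the command fails
    must_fail() { if "$@"; then echo "expected failure: $*" >&2; return 1; fi; }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    must_fail "$DD" --ii= --ob=                                # unknown option -> "Invalid arguments"
    must_fail "$DD" --if=dd.dump0 --ib= --ob=                  # --if and --ib are mutually exclusive
    must_fail "$DD" --if=dd.dump0 --of=dd.dump1 --ob=          # --of and --ob are mutually exclusive
    must_fail "$DD" --if=dd.dump0 --of=dd.dump1 --bs=0         # "Invalid --bs value"
    must_fail "$DD" --if=dd.dump0 --of=dd.dump1 --count=-9     # "Invalid --count value"
    must_fail "$DD" --ib= --ob= --oflag=0                      # --oflags may be used only with --of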
00:07:33.934 * Looking for test storage... 00:07:33.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:33.934 15:30:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.934 15:30:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.934 15:30:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.934 15:30:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.934 15:30:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.934 15:30:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.934 15:30:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.935 15:30:35 -- paths/export.sh@5 -- # export PATH 00:07:33.935 15:30:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.935 15:30:35 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.935 15:30:35 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.935 15:30:35 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.935 15:30:35 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.935 15:30:35 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:33.935 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.935 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.935 15:30:35 -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.935 ************************************ 00:07:33.935 START TEST dd_invalid_arguments 00:07:33.935 ************************************ 00:07:33.935 15:30:35 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:07:33.935 15:30:35 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:33.935 15:30:35 -- common/autotest_common.sh@638 -- # local es=0 00:07:33.935 15:30:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:33.935 15:30:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.935 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.935 15:30:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.935 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.935 15:30:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.935 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.935 15:30:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.935 15:30:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.935 15:30:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:34.194 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:34.194 options: 00:07:34.194 -c, --config JSON config file 00:07:34.194 --json JSON config file 00:07:34.194 --json-ignore-init-errors 00:07:34.194 don't exit on invalid config entry 00:07:34.194 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:34.194 -g, --single-file-segments 00:07:34.194 force creating just one hugetlbfs file 00:07:34.194 -h, --help show this usage 00:07:34.194 -i, --shm-id shared memory ID (optional) 00:07:34.194 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:34.194 --lcores lcore to CPU mapping list. The list is in the format: 00:07:34.194 [<,lcores[@CPUs]>...] 00:07:34.194 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:34.194 Within the group, '-' is used for range separator, 00:07:34.194 ',' is used for single number separator. 00:07:34.194 '( )' can be omitted for single element group, 00:07:34.194 '@' can be omitted if cpus and lcores have the same value 00:07:34.194 -n, --mem-channels channel number of memory channels used for DPDK 00:07:34.194 -p, --main-core main (primary) core for DPDK 00:07:34.194 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:34.194 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:34.194 --disable-cpumask-locks Disable CPU core lock files. 
00:07:34.194 --silence-noticelog disable notice level logging to stderr 00:07:34.194 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:34.194 -u, --no-pci disable PCI access 00:07:34.194 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:34.194 --max-delay maximum reactor delay (in microseconds) 00:07:34.194 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:34.194 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:34.194 -R, --huge-unlink unlink huge files after initialization 00:07:34.194 -v, --version print SPDK version 00:07:34.194 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:34.194 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:34.194 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:34.194 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:34.194 Tracepoints vary in size and can use more than one trace entry. 00:07:34.194 --rpcs-allowed comma-separated list of permitted RPCS 00:07:34.194 --env-context Opaque context for use of the env implementation 00:07:34.194 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:34.194 --no-huge run without using hugepages 00:07:34.194 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:34.194 -e, --tpoint-group [:] 00:07:34.194 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all) 00:07:34.194 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:34.194 Groups and masks can be combined (e.g/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:34.194 [2024-04-17 15:30:35.412689] spdk_dd.c:1479:main: *ERROR*: Invalid arguments 00:07:34.194 . thread,bdev:0x1). 00:07:34.194 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:34.194 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:34.194 [--------- DD Options ---------] 00:07:34.194 --if Input file. Must specify either --if or --ib. 00:07:34.194 --ib Input bdev. Must specifier either --if or --ib 00:07:34.194 --of Output file. Must specify either --of or --ob. 00:07:34.194 --ob Output bdev. Must specify either --of or --ob. 00:07:34.194 --iflag Input file flags. 00:07:34.194 --oflag Output file flags. 00:07:34.194 --bs I/O unit size (default: 4096) 00:07:34.194 --qd Queue depth (default: 2) 00:07:34.194 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:07:34.194 --skip Skip this many I/O units at start of input. (default: 0) 00:07:34.194 --seek Skip this many I/O units at start of output. (default: 0) 00:07:34.194 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:34.194 --sparse Enable hole skipping in input target 00:07:34.194 Available iflag and oflag values: 00:07:34.194 append - append mode 00:07:34.194 direct - use direct I/O for data 00:07:34.194 directory - fail unless a directory 00:07:34.194 dsync - use synchronized I/O for data 00:07:34.194 noatime - do not update access time 00:07:34.194 noctty - do not assign controlling terminal from file 00:07:34.194 nofollow - do not follow symlinks 00:07:34.194 nonblock - use non-blocking I/O 00:07:34.194 sync - use synchronized I/O for data and metadata 00:07:34.194 15:30:35 -- common/autotest_common.sh@641 -- # es=2 00:07:34.194 15:30:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.194 15:30:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.194 15:30:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.194 ************************************ 00:07:34.194 END TEST dd_invalid_arguments 00:07:34.194 ************************************ 00:07:34.194 00:07:34.194 real 0m0.072s 00:07:34.194 user 0m0.042s 00:07:34.194 sys 0m0.028s 00:07:34.194 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.194 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.194 15:30:35 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:34.194 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.194 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.194 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.194 ************************************ 00:07:34.194 START TEST dd_double_input 00:07:34.194 ************************************ 00:07:34.194 15:30:35 -- common/autotest_common.sh@1111 -- # double_input 00:07:34.194 15:30:35 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.194 15:30:35 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.194 15:30:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.194 15:30:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.194 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.194 15:30:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.194 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.194 15:30:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.194 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.194 15:30:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.194 15:30:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.194 15:30:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.194 [2024-04-17 15:30:35.601560] spdk_dd.c:1486:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:34.194 15:30:35 -- common/autotest_common.sh@641 -- # es=22 00:07:34.194 15:30:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.194 15:30:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.194 15:30:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.194 00:07:34.194 real 0m0.061s 00:07:34.194 user 0m0.035s 00:07:34.194 sys 0m0.026s 00:07:34.194 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.194 ************************************ 00:07:34.194 END TEST dd_double_input 00:07:34.194 ************************************ 00:07:34.194 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.453 15:30:35 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:34.453 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.453 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.453 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.453 ************************************ 00:07:34.453 START TEST dd_double_output 00:07:34.453 ************************************ 00:07:34.453 15:30:35 -- common/autotest_common.sh@1111 -- # double_output 00:07:34.453 15:30:35 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:34.453 15:30:35 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.453 15:30:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:34.453 15:30:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.453 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.453 15:30:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.453 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.453 15:30:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.453 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.453 15:30:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.453 15:30:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.453 15:30:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:34.453 [2024-04-17 15:30:35.780059] spdk_dd.c:1492:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:07:34.453 15:30:35 -- common/autotest_common.sh@641 -- # es=22 00:07:34.453 15:30:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.453 15:30:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.453 15:30:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.453 00:07:34.453 real 0m0.059s 00:07:34.453 user 0m0.035s 00:07:34.453 sys 0m0.023s 00:07:34.453 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.453 ************************************ 00:07:34.453 END TEST dd_double_output 00:07:34.453 ************************************ 00:07:34.453 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.453 15:30:35 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:34.453 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.453 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.453 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.711 ************************************ 00:07:34.711 START TEST dd_no_input 00:07:34.711 ************************************ 00:07:34.711 15:30:35 -- common/autotest_common.sh@1111 -- # no_input 00:07:34.711 15:30:35 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:34.711 15:30:35 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.711 15:30:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:34.711 15:30:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.711 15:30:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:34.711 [2024-04-17 15:30:35.967297] spdk_dd.c:1498:main: *ERROR*: You must specify either --if or --ib 00:07:34.711 ************************************ 00:07:34.711 END TEST dd_no_input 00:07:34.711 ************************************ 00:07:34.711 15:30:35 -- common/autotest_common.sh@641 -- # es=22 00:07:34.711 15:30:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.711 15:30:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.711 15:30:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.711 00:07:34.711 real 0m0.072s 00:07:34.711 user 0m0.039s 00:07:34.711 sys 0m0.033s 00:07:34.711 15:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.711 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:34.711 15:30:36 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:34.711 15:30:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.711 15:30:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.711 15:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:34.711 ************************************ 
00:07:34.711 START TEST dd_no_output 00:07:34.711 ************************************ 00:07:34.711 15:30:36 -- common/autotest_common.sh@1111 -- # no_output 00:07:34.711 15:30:36 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.711 15:30:36 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.711 15:30:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.711 15:30:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.711 15:30:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.711 15:30:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.711 15:30:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.968 [2024-04-17 15:30:36.152445] spdk_dd.c:1504:main: *ERROR*: You must specify either --of or --ob 00:07:34.968 15:30:36 -- common/autotest_common.sh@641 -- # es=22 00:07:34.968 15:30:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.968 15:30:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.968 15:30:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.968 00:07:34.968 real 0m0.071s 00:07:34.968 user 0m0.047s 00:07:34.968 sys 0m0.023s 00:07:34.968 15:30:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.968 ************************************ 00:07:34.968 END TEST dd_no_output 00:07:34.968 ************************************ 00:07:34.968 15:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:34.968 15:30:36 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:34.968 15:30:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.968 15:30:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.969 15:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:34.969 ************************************ 00:07:34.969 START TEST dd_wrong_blocksize 00:07:34.969 ************************************ 00:07:34.969 15:30:36 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:07:34.969 15:30:36 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.969 15:30:36 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.969 15:30:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.969 15:30:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.969 15:30:36 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:07:34.969 15:30:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.969 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.969 15:30:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.969 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.969 15:30:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.969 15:30:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.969 15:30:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:34.969 [2024-04-17 15:30:36.347990] spdk_dd.c:1510:main: *ERROR*: Invalid --bs value 00:07:34.969 15:30:36 -- common/autotest_common.sh@641 -- # es=22 00:07:34.969 15:30:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.969 15:30:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.969 15:30:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.969 00:07:34.969 real 0m0.075s 00:07:34.969 user 0m0.048s 00:07:34.969 sys 0m0.025s 00:07:34.969 15:30:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.969 ************************************ 00:07:34.969 END TEST dd_wrong_blocksize 00:07:34.969 ************************************ 00:07:34.969 15:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:34.969 15:30:36 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:34.969 15:30:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.969 15:30:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.969 15:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:35.226 ************************************ 00:07:35.226 START TEST dd_smaller_blocksize 00:07:35.226 ************************************ 00:07:35.226 15:30:36 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:07:35.226 15:30:36 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.226 15:30:36 -- common/autotest_common.sh@638 -- # local es=0 00:07:35.226 15:30:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.226 15:30:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.226 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.226 15:30:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.226 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.226 15:30:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.226 15:30:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.226 15:30:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.226 15:30:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:35.226 15:30:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.226 [2024-04-17 15:30:36.535337] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:35.226 [2024-04-17 15:30:36.535423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64662 ] 00:07:35.482 [2024-04-17 15:30:36.669915] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.482 [2024-04-17 15:30:36.769758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.740 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:35.997 [2024-04-17 15:30:37.182758] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:35.997 [2024-04-17 15:30:37.182901] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.997 [2024-04-17 15:30:37.351089] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:07:36.255 15:30:37 -- common/autotest_common.sh@641 -- # es=244 00:07:36.255 15:30:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:36.255 15:30:37 -- common/autotest_common.sh@650 -- # es=116 00:07:36.255 15:30:37 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:36.255 15:30:37 -- common/autotest_common.sh@658 -- # es=1 00:07:36.255 ************************************ 00:07:36.255 END TEST dd_smaller_blocksize 00:07:36.255 ************************************ 00:07:36.255 15:30:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:36.255 00:07:36.255 real 0m1.037s 00:07:36.255 user 0m0.498s 00:07:36.255 sys 0m0.432s 00:07:36.255 15:30:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.255 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 15:30:37 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:36.255 15:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.255 15:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.255 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 ************************************ 00:07:36.255 START TEST dd_invalid_count 00:07:36.255 ************************************ 00:07:36.255 15:30:37 -- common/autotest_common.sh@1111 -- # invalid_count 00:07:36.255 15:30:37 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.255 15:30:37 -- common/autotest_common.sh@638 -- # local es=0 00:07:36.255 15:30:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.255 15:30:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.255 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.255 15:30:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.255 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.255 15:30:37 -- 
common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.256 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.256 15:30:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.256 15:30:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.256 15:30:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.256 [2024-04-17 15:30:37.688567] spdk_dd.c:1516:main: *ERROR*: Invalid --count value 00:07:36.512 15:30:37 -- common/autotest_common.sh@641 -- # es=22 00:07:36.512 15:30:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:36.512 15:30:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:36.512 15:30:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:36.512 00:07:36.512 real 0m0.073s 00:07:36.512 user 0m0.045s 00:07:36.512 sys 0m0.027s 00:07:36.513 ************************************ 00:07:36.513 END TEST dd_invalid_count 00:07:36.513 ************************************ 00:07:36.513 15:30:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.513 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 15:30:37 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:36.513 15:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.513 15:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.513 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 ************************************ 00:07:36.513 START TEST dd_invalid_oflag 00:07:36.513 ************************************ 00:07:36.513 15:30:37 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:07:36.513 15:30:37 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:36.513 15:30:37 -- common/autotest_common.sh@638 -- # local es=0 00:07:36.513 15:30:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:36.513 15:30:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.513 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.513 15:30:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.513 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.513 15:30:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.513 15:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.513 15:30:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.513 15:30:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.513 15:30:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:36.513 [2024-04-17 15:30:37.878371] spdk_dd.c:1522:main: *ERROR*: --oflags may be used only with --of 00:07:36.513 15:30:37 -- common/autotest_common.sh@641 -- # es=22 00:07:36.513 15:30:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:36.513 15:30:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:36.513 
15:30:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:36.513 00:07:36.513 real 0m0.072s 00:07:36.513 user 0m0.047s 00:07:36.513 sys 0m0.024s 00:07:36.513 15:30:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.513 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 ************************************ 00:07:36.513 END TEST dd_invalid_oflag 00:07:36.513 ************************************ 00:07:36.513 15:30:37 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:36.513 15:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.513 15:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.513 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.770 ************************************ 00:07:36.770 START TEST dd_invalid_iflag 00:07:36.770 ************************************ 00:07:36.770 15:30:38 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:07:36.770 15:30:38 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:36.770 15:30:38 -- common/autotest_common.sh@638 -- # local es=0 00:07:36.770 15:30:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:36.770 15:30:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.770 15:30:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:36.770 [2024-04-17 15:30:38.057483] spdk_dd.c:1528:main: *ERROR*: --iflags may be used only with --if 00:07:36.770 15:30:38 -- common/autotest_common.sh@641 -- # es=22 00:07:36.770 15:30:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:36.770 15:30:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:36.770 15:30:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:36.770 00:07:36.770 real 0m0.068s 00:07:36.770 user 0m0.044s 00:07:36.770 sys 0m0.023s 00:07:36.770 15:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.770 ************************************ 00:07:36.770 END TEST dd_invalid_iflag 00:07:36.770 ************************************ 00:07:36.770 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.770 15:30:38 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:36.770 15:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.770 15:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.770 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.770 ************************************ 00:07:36.770 START TEST dd_unknown_flag 00:07:36.770 ************************************ 00:07:36.770 15:30:38 -- common/autotest_common.sh@1111 -- # 
unknown_flag 00:07:36.770 15:30:38 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:36.770 15:30:38 -- common/autotest_common.sh@638 -- # local es=0 00:07:36.770 15:30:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:36.770 15:30:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.770 15:30:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.770 15:30:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:37.028 [2024-04-17 15:30:38.251863] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:37.028 [2024-04-17 15:30:38.251944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64775 ] 00:07:37.028 [2024-04-17 15:30:38.391383] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.287 [2024-04-17 15:30:38.525777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.287 [2024-04-17 15:30:38.645499] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:37.287 [2024-04-17 15:30:38.645571] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.287 [2024-04-17 15:30:38.645655] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:37.287 [2024-04-17 15:30:38.645669] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.287 [2024-04-17 15:30:38.645999] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:37.287 [2024-04-17 15:30:38.646017] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.287 [2024-04-17 15:30:38.646075] app.c: 946:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:37.287 [2024-04-17 15:30:38.646086] app.c: 946:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:37.545 [2024-04-17 15:30:38.811740] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:07:37.803 15:30:38 -- common/autotest_common.sh@641 -- # es=234 00:07:37.803 15:30:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:37.803 15:30:38 -- common/autotest_common.sh@650 -- # es=106 00:07:37.803 ************************************ 00:07:37.803 END TEST dd_unknown_flag 00:07:37.803 ************************************ 00:07:37.803 15:30:38 -- 
common/autotest_common.sh@651 -- # case "$es" in 00:07:37.803 15:30:38 -- common/autotest_common.sh@658 -- # es=1 00:07:37.803 15:30:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:37.803 00:07:37.803 real 0m0.796s 00:07:37.803 user 0m0.497s 00:07:37.803 sys 0m0.203s 00:07:37.804 15:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.804 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 15:30:39 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:37.804 15:30:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.804 15:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.804 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 ************************************ 00:07:37.804 START TEST dd_invalid_json 00:07:37.804 ************************************ 00:07:37.804 15:30:39 -- common/autotest_common.sh@1111 -- # invalid_json 00:07:37.804 15:30:39 -- dd/negative_dd.sh@95 -- # : 00:07:37.804 15:30:39 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:37.804 15:30:39 -- common/autotest_common.sh@638 -- # local es=0 00:07:37.804 15:30:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:37.804 15:30:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.804 15:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.804 15:30:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.804 15:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.804 15:30:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.804 15:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.804 15:30:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.804 15:30:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.804 15:30:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:37.804 [2024-04-17 15:30:39.176842] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:07:37.804 [2024-04-17 15:30:39.177009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64813 ] 00:07:38.062 [2024-04-17 15:30:39.318314] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.062 [2024-04-17 15:30:39.457282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.062 [2024-04-17 15:30:39.457377] json_config.c: 509:parse_json: *ERROR*: JSON data cannot be empty 00:07:38.062 [2024-04-17 15:30:39.457395] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.062 [2024-04-17 15:30:39.457405] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.062 [2024-04-17 15:30:39.457443] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:07:38.320 15:30:39 -- common/autotest_common.sh@641 -- # es=234 00:07:38.320 15:30:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:38.320 15:30:39 -- common/autotest_common.sh@650 -- # es=106 00:07:38.320 15:30:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:38.320 15:30:39 -- common/autotest_common.sh@658 -- # es=1 00:07:38.320 15:30:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:38.320 00:07:38.320 real 0m0.518s 00:07:38.320 user 0m0.323s 00:07:38.320 sys 0m0.090s 00:07:38.320 15:30:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.320 ************************************ 00:07:38.320 END TEST dd_invalid_json 00:07:38.320 ************************************ 00:07:38.320 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.320 ************************************ 00:07:38.320 END TEST spdk_dd_negative 00:07:38.320 ************************************ 00:07:38.320 00:07:38.320 real 0m4.478s 00:07:38.320 user 0m2.216s 00:07:38.320 sys 0m1.744s 00:07:38.320 15:30:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.320 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.320 ************************************ 00:07:38.320 END TEST spdk_dd 00:07:38.320 ************************************ 00:07:38.320 00:07:38.320 real 1m33.482s 00:07:38.320 user 1m1.371s 00:07:38.320 sys 0m39.832s 00:07:38.320 15:30:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.320 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.320 15:30:39 -- spdk/autotest.sh@206 -- # '[' 0 -eq 1 ']' 00:07:38.320 15:30:39 -- spdk/autotest.sh@253 -- # '[' 0 -eq 1 ']' 00:07:38.320 15:30:39 -- spdk/autotest.sh@257 -- # timing_exit lib 00:07:38.320 15:30:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:38.320 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.621 15:30:39 -- spdk/autotest.sh@259 -- # '[' 0 -eq 1 ']' 00:07:38.621 15:30:39 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:38.621 15:30:39 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:38.621 15:30:39 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:38.621 15:30:39 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:38.621 15:30:39 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:38.621 15:30:39 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.621 15:30:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:38.621 15:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.621 15:30:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.621 ************************************ 00:07:38.621 START TEST nvmf_tcp 00:07:38.621 ************************************ 00:07:38.621 15:30:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.621 * Looking for test storage... 00:07:38.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.621 15:30:39 -- nvmf/common.sh@7 -- # uname -s 00:07:38.621 15:30:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.621 15:30:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.621 15:30:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.621 15:30:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.621 15:30:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.621 15:30:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.621 15:30:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.621 15:30:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.621 15:30:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.621 15:30:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.621 15:30:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:38.621 15:30:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:38.621 15:30:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.621 15:30:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.621 15:30:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:38.621 15:30:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.621 15:30:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.621 15:30:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.621 15:30:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.621 15:30:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.621 15:30:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.621 15:30:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.621 15:30:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.621 15:30:39 -- paths/export.sh@5 -- # export PATH 00:07:38.621 15:30:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.621 15:30:39 -- nvmf/common.sh@47 -- # : 0 00:07:38.621 15:30:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.621 15:30:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.621 15:30:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.621 15:30:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.621 15:30:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.621 15:30:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.621 15:30:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.621 15:30:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:38.621 15:30:39 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:38.622 15:30:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:38.622 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.622 15:30:39 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:38.622 15:30:39 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.622 15:30:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:38.622 15:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.622 15:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.902 ************************************ 00:07:38.902 START TEST nvmf_host_management 00:07:38.902 ************************************ 00:07:38.902 15:30:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.902 * Looking for test storage... 
00:07:38.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:38.902 15:30:40 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.902 15:30:40 -- nvmf/common.sh@7 -- # uname -s 00:07:38.902 15:30:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.902 15:30:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.902 15:30:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.902 15:30:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.902 15:30:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.902 15:30:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.902 15:30:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.902 15:30:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.902 15:30:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.902 15:30:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.902 15:30:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:38.902 15:30:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:38.903 15:30:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.903 15:30:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.903 15:30:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:38.903 15:30:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.903 15:30:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.903 15:30:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.903 15:30:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.903 15:30:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.903 15:30:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.903 15:30:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.903 15:30:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.903 15:30:40 -- paths/export.sh@5 -- # export PATH 00:07:38.903 15:30:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.903 15:30:40 -- nvmf/common.sh@47 -- # : 0 00:07:38.903 15:30:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.903 15:30:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.903 15:30:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.903 15:30:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.903 15:30:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.903 15:30:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.903 15:30:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.903 15:30:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.903 15:30:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.903 15:30:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.903 15:30:40 -- target/host_management.sh@104 -- # nvmftestinit 00:07:38.903 15:30:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:38.903 15:30:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.903 15:30:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:38.903 15:30:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:38.903 15:30:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:38.903 15:30:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.903 15:30:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.903 15:30:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.903 15:30:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:38.903 15:30:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:38.903 15:30:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:38.903 15:30:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:07:38.903 15:30:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:38.903 15:30:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:38.903 15:30:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.903 15:30:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.903 15:30:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:38.903 15:30:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:38.903 15:30:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:38.903 15:30:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:38.903 15:30:40 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:38.903 15:30:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.903 15:30:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:38.903 15:30:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:38.903 15:30:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:38.903 15:30:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:38.903 15:30:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:38.903 Cannot find device "nvmf_init_br" 00:07:38.903 15:30:40 -- nvmf/common.sh@154 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:38.903 Cannot find device "nvmf_tgt_br" 00:07:38.903 15:30:40 -- nvmf/common.sh@155 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:38.903 Cannot find device "nvmf_tgt_br2" 00:07:38.903 15:30:40 -- nvmf/common.sh@156 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:38.903 Cannot find device "nvmf_init_br" 00:07:38.903 15:30:40 -- nvmf/common.sh@157 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:38.903 Cannot find device "nvmf_tgt_br" 00:07:38.903 15:30:40 -- nvmf/common.sh@158 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:38.903 Cannot find device "nvmf_tgt_br2" 00:07:38.903 15:30:40 -- nvmf/common.sh@159 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:38.903 Cannot find device "nvmf_br" 00:07:38.903 15:30:40 -- nvmf/common.sh@160 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:38.903 Cannot find device "nvmf_init_if" 00:07:38.903 15:30:40 -- nvmf/common.sh@161 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:38.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.903 15:30:40 -- nvmf/common.sh@162 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.903 15:30:40 -- nvmf/common.sh@163 -- # true 00:07:38.903 15:30:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:38.903 15:30:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:38.903 15:30:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:38.903 15:30:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:38.903 15:30:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:39.161 15:30:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:39.161 15:30:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:39.161 15:30:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:39.161 15:30:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:39.161 15:30:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:39.161 15:30:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:39.161 15:30:40 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:39.161 15:30:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:39.161 15:30:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:39.161 15:30:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:39.161 15:30:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:39.161 15:30:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:39.161 15:30:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:39.161 15:30:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:39.161 15:30:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:39.161 15:30:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:39.161 15:30:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:39.161 15:30:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:39.161 15:30:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:39.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:07:39.161 00:07:39.161 --- 10.0.0.2 ping statistics --- 00:07:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.161 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:39.161 15:30:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:39.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:39.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:07:39.161 00:07:39.161 --- 10.0.0.3 ping statistics --- 00:07:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.161 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:39.161 15:30:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:39.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:39.161 00:07:39.161 --- 10.0.0.1 ping statistics --- 00:07:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.161 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:39.161 15:30:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.161 15:30:40 -- nvmf/common.sh@422 -- # return 0 00:07:39.161 15:30:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:39.161 15:30:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.161 15:30:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:39.162 15:30:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:39.162 15:30:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.162 15:30:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:39.162 15:30:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:39.420 15:30:40 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:39.420 15:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.420 15:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.420 15:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 ************************************ 00:07:39.420 START TEST nvmf_host_management 00:07:39.420 ************************************ 00:07:39.420 15:30:40 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:07:39.420 15:30:40 -- target/host_management.sh@69 -- # starttarget 00:07:39.420 15:30:40 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:39.420 15:30:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:39.420 15:30:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:39.420 15:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.420 15:30:40 -- nvmf/common.sh@470 -- # nvmfpid=65094 00:07:39.420 15:30:40 -- nvmf/common.sh@471 -- # waitforlisten 65094 00:07:39.420 15:30:40 -- common/autotest_common.sh@817 -- # '[' -z 65094 ']' 00:07:39.420 15:30:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:39.420 15:30:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.420 15:30:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:39.420 15:30:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.420 15:30:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:39.420 15:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 [2024-04-17 15:30:40.761462] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:39.420 [2024-04-17 15:30:40.762238] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.678 [2024-04-17 15:30:40.906032] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.678 [2024-04-17 15:30:41.064020] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.678 [2024-04-17 15:30:41.064345] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:39.678 [2024-04-17 15:30:41.064504] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.678 [2024-04-17 15:30:41.064557] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.678 [2024-04-17 15:30:41.064654] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.678 [2024-04-17 15:30:41.065129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.678 [2024-04-17 15:30:41.065331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.678 [2024-04-17 15:30:41.065487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.678 [2024-04-17 15:30:41.065492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.614 15:30:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:40.614 15:30:41 -- common/autotest_common.sh@850 -- # return 0 00:07:40.614 15:30:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:40.614 15:30:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 15:30:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.614 15:30:41 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.614 15:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 [2024-04-17 15:30:41.812789] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.614 15:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.614 15:30:41 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:40.614 15:30:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 15:30:41 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:40.614 15:30:41 -- target/host_management.sh@23 -- # cat 00:07:40.614 15:30:41 -- target/host_management.sh@30 -- # rpc_cmd 00:07:40.614 15:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 Malloc0 00:07:40.614 [2024-04-17 15:30:41.898893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.614 15:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.614 15:30:41 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:40.614 15:30:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:40.614 15:30:41 -- target/host_management.sh@73 -- # perfpid=65152 00:07:40.614 15:30:41 -- target/host_management.sh@74 -- # waitforlisten 65152 /var/tmp/bdevperf.sock 00:07:40.614 15:30:41 -- common/autotest_common.sh@817 -- # '[' -z 65152 ']' 00:07:40.614 15:30:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.614 15:30:41 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:40.614 15:30:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:40.614 15:30:41 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:40.614 15:30:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.614 15:30:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:40.614 15:30:41 -- nvmf/common.sh@521 -- # config=() 00:07:40.614 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.614 15:30:41 -- nvmf/common.sh@521 -- # local subsystem config 00:07:40.614 15:30:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:07:40.614 15:30:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:07:40.614 { 00:07:40.614 "params": { 00:07:40.614 "name": "Nvme$subsystem", 00:07:40.614 "trtype": "$TEST_TRANSPORT", 00:07:40.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.614 "adrfam": "ipv4", 00:07:40.614 "trsvcid": "$NVMF_PORT", 00:07:40.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.614 "hdgst": ${hdgst:-false}, 00:07:40.614 "ddgst": ${ddgst:-false} 00:07:40.614 }, 00:07:40.614 "method": "bdev_nvme_attach_controller" 00:07:40.614 } 00:07:40.614 EOF 00:07:40.614 )") 00:07:40.614 15:30:41 -- nvmf/common.sh@543 -- # cat 00:07:40.614 15:30:41 -- nvmf/common.sh@545 -- # jq . 00:07:40.614 15:30:41 -- nvmf/common.sh@546 -- # IFS=, 00:07:40.614 15:30:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:07:40.614 "params": { 00:07:40.614 "name": "Nvme0", 00:07:40.614 "trtype": "tcp", 00:07:40.614 "traddr": "10.0.0.2", 00:07:40.614 "adrfam": "ipv4", 00:07:40.614 "trsvcid": "4420", 00:07:40.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.614 "hdgst": false, 00:07:40.614 "ddgst": false 00:07:40.614 }, 00:07:40.614 "method": "bdev_nvme_attach_controller" 00:07:40.614 }' 00:07:40.614 [2024-04-17 15:30:42.004079] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:40.614 [2024-04-17 15:30:42.004177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65152 ] 00:07:40.873 [2024-04-17 15:30:42.149201] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.873 [2024-04-17 15:30:42.302088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.133 Running I/O for 10 seconds... 
00:07:41.708 15:30:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:41.708 15:30:42 -- common/autotest_common.sh@850 -- # return 0 00:07:41.708 15:30:42 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:41.708 15:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.708 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:07:41.708 15:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.708 15:30:42 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.708 15:30:42 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:41.708 15:30:42 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:41.708 15:30:42 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:41.708 15:30:42 -- target/host_management.sh@52 -- # local ret=1 00:07:41.708 15:30:42 -- target/host_management.sh@53 -- # local i 00:07:41.708 15:30:42 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:41.708 15:30:42 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.708 15:30:42 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:41.708 15:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.708 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:07:41.708 15:30:42 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.708 15:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.708 15:30:43 -- target/host_management.sh@55 -- # read_io_count=579 00:07:41.708 15:30:43 -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:41.708 15:30:43 -- target/host_management.sh@59 -- # ret=0 00:07:41.708 15:30:43 -- target/host_management.sh@60 -- # break 00:07:41.708 15:30:43 -- target/host_management.sh@64 -- # return 0 00:07:41.708 15:30:43 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.708 15:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.708 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.708 [2024-04-17 15:30:43.020848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the 
state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.708 [2024-04-17 15:30:43.022746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022804] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [2024-04-17 15:30:43.022884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8640 is same with the state(5) to be set 00:07:41.709 [tcp.c:1587:nvmf_tcp_qpair_set_recv_state: the same *ERROR* line repeats for tqpair=0x11a8640 from 15:30:43.022893 through 15:30:43.023206] 00:07:41.709 [2024-04-17 15:30:43.023342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.709 [2024-04-17 15:30:43.023374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.709 [nvme_io_qpair_print_command/spdk_nvme_print_completion: the READ + ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:63, lba 82048 through 89984, len:128, from 15:30:43.023402 through 15:30:43.024963] 00:07:41.711 [2024-04-17 15:30:43.024983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4d0940 is same with the state(5) to be set 00:07:41.711 [2024-04-17 15:30:43.025097] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb:
*NOTICE*: qpair 0x4d0940 was disconnected and freed. reset controller. 00:07:41.711 15:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.711 15:30:43 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.711 15:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.711 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.711 [2024-04-17 15:30:43.026381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:41.711 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:41.711 00:07:41.711 Latency(us) 00:07:41.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.711 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.711 Job: Nvme0n1 ended in about 0.50 seconds with error 00:07:41.711 Verification LBA range: start 0x0 length 0x400 00:07:41.711 Nvme0n1 : 0.50 1268.90 79.31 126.89 0.00 44455.18 8281.37 43849.54 00:07:41.711 =================================================================================================================== 00:07:41.711 Total : 1268.90 79.31 126.89 0.00 44455.18 8281.37 43849.54 00:07:41.711 [2024-04-17 15:30:43.029298] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.711 [2024-04-17 15:30:43.029334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ab1b0 (9): Bad file descriptor 00:07:41.711 15:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.711 15:30:43 -- target/host_management.sh@87 -- # sleep 1 00:07:41.711 [2024-04-17 15:30:43.042569] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.647 15:30:44 -- target/host_management.sh@91 -- # kill -9 65152 00:07:42.647 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65152) - No such process 00:07:42.647 15:30:44 -- target/host_management.sh@91 -- # true 00:07:42.647 15:30:44 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:42.647 15:30:44 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:42.647 15:30:44 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:42.647 15:30:44 -- nvmf/common.sh@521 -- # config=() 00:07:42.647 15:30:44 -- nvmf/common.sh@521 -- # local subsystem config 00:07:42.647 15:30:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:07:42.647 15:30:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:07:42.647 { 00:07:42.647 "params": { 00:07:42.647 "name": "Nvme$subsystem", 00:07:42.647 "trtype": "$TEST_TRANSPORT", 00:07:42.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:42.647 "adrfam": "ipv4", 00:07:42.647 "trsvcid": "$NVMF_PORT", 00:07:42.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:42.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:42.647 "hdgst": ${hdgst:-false}, 00:07:42.647 "ddgst": ${ddgst:-false} 00:07:42.647 }, 00:07:42.647 "method": "bdev_nvme_attach_controller" 00:07:42.647 } 00:07:42.647 EOF 00:07:42.647 )") 00:07:42.647 15:30:44 -- nvmf/common.sh@543 -- # cat 00:07:42.647 15:30:44 -- nvmf/common.sh@545 -- # jq . 
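The bdevperf re-run above never touches a config file: gen_nvmf_target_json expands the heredoc once per subsystem and the result reaches bdevperf through --json /dev/fd/62. Outside the harness the same wiring is roughly a process substitution (a sketch; only the flags shown in this trace are used, and the <() plumbing is an assumption about how fd 62 is fed):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -q 64 -o 65536 -w verify -t 1 --json <(gen_nvmf_target_json 0)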
00:07:42.647 15:30:44 -- nvmf/common.sh@546 -- # IFS=, 00:07:42.647 15:30:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:07:42.647 "params": { 00:07:42.647 "name": "Nvme0", 00:07:42.647 "trtype": "tcp", 00:07:42.647 "traddr": "10.0.0.2", 00:07:42.647 "adrfam": "ipv4", 00:07:42.647 "trsvcid": "4420", 00:07:42.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:42.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:42.647 "hdgst": false, 00:07:42.647 "ddgst": false 00:07:42.647 }, 00:07:42.647 "method": "bdev_nvme_attach_controller" 00:07:42.647 }' 00:07:42.907 [2024-04-17 15:30:44.092615] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:42.907 [2024-04-17 15:30:44.092719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65186 ] 00:07:42.907 [2024-04-17 15:30:44.228955] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.164 [2024-04-17 15:30:44.350946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.164 Running I/O for 1 seconds... 00:07:44.551 00:07:44.551 Latency(us) 00:07:44.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.551 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:44.551 Verification LBA range: start 0x0 length 0x400 00:07:44.551 Nvme0n1 : 1.03 1559.01 97.44 0.00 0.00 40247.47 4289.63 40274.85 00:07:44.551 =================================================================================================================== 00:07:44.551 Total : 1559.01 97.44 0.00 0.00 40247.47 4289.63 40274.85 00:07:44.551 15:30:45 -- target/host_management.sh@101 -- # stoptarget 00:07:44.552 15:30:45 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:44.552 15:30:45 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:44.552 15:30:45 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:44.552 15:30:45 -- target/host_management.sh@40 -- # nvmftestfini 00:07:44.552 15:30:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:44.552 15:30:45 -- nvmf/common.sh@117 -- # sync 00:07:44.822 15:30:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.822 15:30:46 -- nvmf/common.sh@120 -- # set +e 00:07:44.823 15:30:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.823 15:30:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.823 rmmod nvme_tcp 00:07:44.823 rmmod nvme_fabrics 00:07:44.823 rmmod nvme_keyring 00:07:44.823 15:30:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.823 15:30:46 -- nvmf/common.sh@124 -- # set -e 00:07:44.823 15:30:46 -- nvmf/common.sh@125 -- # return 0 00:07:44.823 15:30:46 -- nvmf/common.sh@478 -- # '[' -n 65094 ']' 00:07:44.823 15:30:46 -- nvmf/common.sh@479 -- # killprocess 65094 00:07:44.823 15:30:46 -- common/autotest_common.sh@936 -- # '[' -z 65094 ']' 00:07:44.823 15:30:46 -- common/autotest_common.sh@940 -- # kill -0 65094 00:07:44.823 15:30:46 -- common/autotest_common.sh@941 -- # uname 00:07:44.823 15:30:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.823 15:30:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65094 00:07:44.823 killing process with pid 65094 00:07:44.823 15:30:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
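The resolved JSON printed above is just the file form of a single bdev_nvme_attach_controller call; issued against a live RPC socket the same attach would look roughly like this (a sketch for illustration only, the test itself goes through the --json path):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0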
00:07:44.823 15:30:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:44.823 15:30:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65094' 00:07:44.823 15:30:46 -- common/autotest_common.sh@955 -- # kill 65094 00:07:44.823 15:30:46 -- common/autotest_common.sh@960 -- # wait 65094 00:07:45.081 [2024-04-17 15:30:46.425804] app.c: 628:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:45.081 15:30:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:45.081 15:30:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:45.081 15:30:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:45.081 15:30:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.081 15:30:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.081 15:30:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.081 15:30:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.081 15:30:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.081 15:30:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:45.081 00:07:45.081 real 0m5.794s 00:07:45.081 user 0m24.052s 00:07:45.081 sys 0m1.442s 00:07:45.081 15:30:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.081 15:30:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.081 ************************************ 00:07:45.081 END TEST nvmf_host_management 00:07:45.081 ************************************ 00:07:45.340 15:30:46 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:45.340 00:07:45.340 ************************************ 00:07:45.340 END TEST nvmf_host_management 00:07:45.340 ************************************ 00:07:45.340 real 0m6.465s 00:07:45.340 user 0m24.204s 00:07:45.340 sys 0m1.734s 00:07:45.340 15:30:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.340 15:30:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.340 15:30:46 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.340 15:30:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.340 15:30:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.340 15:30:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.340 ************************************ 00:07:45.340 START TEST nvmf_lvol 00:07:45.340 ************************************ 00:07:45.340 15:30:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.340 * Looking for test storage... 
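run_test is only a timing/trace wrapper around each suite script, so the lvol suite that starts here can also be launched by hand (a sketch, assuming the same checkout path and the virtual-NIC mode this job runs with):
  cd /home/vagrant/spdk_repo/spdk
  NET_TYPE=virt ./test/nvmf/target/nvmf_lvol.sh --transport=tcp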
00:07:45.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.340 15:30:46 -- nvmf/common.sh@7 -- # uname -s 00:07:45.340 15:30:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.340 15:30:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.340 15:30:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.340 15:30:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.340 15:30:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.340 15:30:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.340 15:30:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.340 15:30:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.340 15:30:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.340 15:30:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:45.340 15:30:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:07:45.340 15:30:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.340 15:30:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.340 15:30:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.340 15:30:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.340 15:30:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.340 15:30:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.340 15:30:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.340 15:30:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.340 15:30:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.340 15:30:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.340 15:30:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.340 15:30:46 -- paths/export.sh@5 -- # export PATH 00:07:45.340 15:30:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.340 15:30:46 -- nvmf/common.sh@47 -- # : 0 00:07:45.340 15:30:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.340 15:30:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.340 15:30:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.340 15:30:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.340 15:30:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.340 15:30:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.340 15:30:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.340 15:30:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.340 15:30:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:45.340 15:30:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:45.340 15:30:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.340 15:30:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:45.340 15:30:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:45.340 15:30:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:45.340 15:30:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.340 15:30:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.340 15:30:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.340 15:30:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:45.340 15:30:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:45.340 15:30:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.340 15:30:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.340 15:30:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.340 15:30:46 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.340 15:30:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.340 15:30:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.340 15:30:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.340 15:30:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.340 15:30:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.340 15:30:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.340 15:30:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.340 15:30:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.340 15:30:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.340 15:30:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.340 Cannot find device "nvmf_tgt_br" 00:07:45.600 15:30:46 -- nvmf/common.sh@155 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.600 Cannot find device "nvmf_tgt_br2" 00:07:45.600 15:30:46 -- nvmf/common.sh@156 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.600 15:30:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.600 Cannot find device "nvmf_tgt_br" 00:07:45.600 15:30:46 -- nvmf/common.sh@158 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.600 Cannot find device "nvmf_tgt_br2" 00:07:45.600 15:30:46 -- nvmf/common.sh@159 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.600 15:30:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.600 15:30:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.600 15:30:46 -- nvmf/common.sh@162 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.600 15:30:46 -- nvmf/common.sh@163 -- # true 00:07:45.600 15:30:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.600 15:30:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.600 15:30:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.600 15:30:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.600 15:30:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.600 15:30:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:45.600 15:30:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:45.600 15:30:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:45.600 15:30:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:45.600 15:30:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:45.600 15:30:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:45.600 15:30:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:45.600 15:30:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:45.600 15:30:46 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:45.600 15:30:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:45.600 15:30:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:45.600 15:30:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:45.600 15:30:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:45.600 15:30:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.600 15:30:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.859 15:30:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.859 15:30:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.859 15:30:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.859 15:30:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:45.859 00:07:45.859 --- 10.0.0.2 ping statistics --- 00:07:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.859 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:45.859 15:30:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:45.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:07:45.859 00:07:45.859 --- 10.0.0.3 ping statistics --- 00:07:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.859 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:45.859 15:30:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:45.859 00:07:45.859 --- 10.0.0.1 ping statistics --- 00:07:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.859 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:45.859 15:30:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.859 15:30:47 -- nvmf/common.sh@422 -- # return 0 00:07:45.859 15:30:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:45.859 15:30:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.859 15:30:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:45.859 15:30:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:45.859 15:30:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.859 15:30:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:45.859 15:30:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:45.859 15:30:47 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:45.859 15:30:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:45.859 15:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:45.859 15:30:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
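The nvmf_veth_init trace above reduces to a small fixed topology: the initiator keeps 10.0.0.1 in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and both sides hang off the nvmf_br bridge, with TCP port 4420 opened on the initiator side. Condensed to its core commands (a sketch of the script above, second target interface and teardown omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT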
00:07:45.859 15:30:47 -- nvmf/common.sh@470 -- # nvmfpid=65426 00:07:45.859 15:30:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:45.859 15:30:47 -- nvmf/common.sh@471 -- # waitforlisten 65426 00:07:45.859 15:30:47 -- common/autotest_common.sh@817 -- # '[' -z 65426 ']' 00:07:45.859 15:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.859 15:30:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:45.859 15:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.859 15:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:45.859 15:30:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.859 [2024-04-17 15:30:47.156218] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:07:45.859 [2024-04-17 15:30:47.156337] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.859 [2024-04-17 15:30:47.287555] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.117 [2024-04-17 15:30:47.413141] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.117 [2024-04-17 15:30:47.413383] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.117 [2024-04-17 15:30:47.413524] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.117 [2024-04-17 15:30:47.413641] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.117 [2024-04-17 15:30:47.413672] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
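nvmfappstart, traced above, comes down to two steps: start nvmf_tgt inside the target namespace and wait for its RPC socket to answer. A minimal sketch (the rpc_get_methods probe is an assumed readiness check, not a quote of waitforlisten):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # target not listening on /var/tmp/spdk.sock yet
  done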
00:07:46.117 [2024-04-17 15:30:47.413922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.117 [2024-04-17 15:30:47.414099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.117 [2024-04-17 15:30:47.414102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.684 15:30:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:46.684 15:30:48 -- common/autotest_common.sh@850 -- # return 0 00:07:46.684 15:30:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:46.684 15:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:46.684 15:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 15:30:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.684 15:30:48 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.942 [2024-04-17 15:30:48.332194] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.942 15:30:48 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:47.508 15:30:48 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:47.508 15:30:48 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:47.766 15:30:49 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:47.766 15:30:49 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:48.024 15:30:49 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:48.282 15:30:49 -- target/nvmf_lvol.sh@29 -- # lvs=6ba4d979-b13b-4cb4-968b-9cda604d243e 00:07:48.282 15:30:49 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ba4d979-b13b-4cb4-968b-9cda604d243e lvol 20 00:07:48.540 15:30:49 -- target/nvmf_lvol.sh@32 -- # lvol=421d2d2d-620e-4ea2-bc7d-20a4f589abc8 00:07:48.540 15:30:49 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.798 15:30:50 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 00:07:48.798 15:30:50 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.056 [2024-04-17 15:30:50.464021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.056 15:30:50 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.313 15:30:50 -- target/nvmf_lvol.sh@42 -- # perf_pid=65507 00:07:49.313 15:30:50 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:49.313 15:30:50 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:50.686 15:30:51 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 MY_SNAPSHOT 00:07:50.686 15:30:52 -- target/nvmf_lvol.sh@47 -- # snapshot=066ddc68-b4a2-426b-aff5-52649408d798 00:07:50.686 15:30:52 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 30 00:07:50.986 15:30:52 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 066ddc68-b4a2-426b-aff5-52649408d798 MY_CLONE 00:07:51.271 15:30:52 -- target/nvmf_lvol.sh@49 -- # clone=705902ce-1691-4322-b7db-6f42a9ebb37c 00:07:51.271 15:30:52 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 705902ce-1691-4322-b7db-6f42a9ebb37c 00:07:51.837 15:30:52 -- target/nvmf_lvol.sh@53 -- # wait 65507 00:07:59.947 Initializing NVMe Controllers 00:07:59.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.947 Controller IO queue size 128, less than required. 00:07:59.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:59.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:59.948 Initialization complete. Launching workers. 00:07:59.948 ======================================================== 00:07:59.948 Latency(us) 00:07:59.948 Device Information : IOPS MiB/s Average min max 00:07:59.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9754.59 38.10 13127.50 2129.51 74699.57 00:07:59.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9885.19 38.61 12954.84 577.04 88086.90 00:07:59.948 ======================================================== 00:07:59.948 Total : 19639.79 76.72 13040.59 577.04 88086.90 00:07:59.948 00:07:59.948 15:31:01 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.948 15:31:01 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 00:08:00.205 15:31:01 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ba4d979-b13b-4cb4-968b-9cda604d243e 00:08:00.464 15:31:01 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:00.464 15:31:01 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:00.464 15:31:01 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:00.464 15:31:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:00.464 15:31:01 -- nvmf/common.sh@117 -- # sync 00:08:00.464 15:31:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.464 15:31:01 -- nvmf/common.sh@120 -- # set +e 00:08:00.464 15:31:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.464 15:31:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.464 rmmod nvme_tcp 00:08:00.464 rmmod nvme_fabrics 00:08:00.464 rmmod nvme_keyring 00:08:00.464 15:31:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.722 15:31:01 -- nvmf/common.sh@124 -- # set -e 00:08:00.722 15:31:01 -- nvmf/common.sh@125 -- # return 0 00:08:00.722 15:31:01 -- nvmf/common.sh@478 -- # '[' -n 65426 ']' 00:08:00.722 15:31:01 -- nvmf/common.sh@479 -- # killprocess 65426 00:08:00.722 15:31:01 -- common/autotest_common.sh@936 -- # '[' -z 65426 ']' 00:08:00.722 15:31:01 -- common/autotest_common.sh@940 -- # kill -0 65426 00:08:00.722 15:31:01 -- common/autotest_common.sh@941 -- # uname 00:08:00.722 15:31:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.722 15:31:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
65426 00:08:00.722 killing process with pid 65426 00:08:00.722 15:31:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.722 15:31:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.722 15:31:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65426' 00:08:00.722 15:31:01 -- common/autotest_common.sh@955 -- # kill 65426 00:08:00.722 15:31:01 -- common/autotest_common.sh@960 -- # wait 65426 00:08:00.980 15:31:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.980 15:31:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:00.980 15:31:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:00.980 15:31:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.980 15:31:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.980 15:31:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.980 15:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.980 15:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.980 15:31:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:00.980 ************************************ 00:08:00.980 END TEST nvmf_lvol 00:08:00.980 ************************************ 00:08:00.980 00:08:00.980 real 0m15.763s 00:08:00.980 user 1m4.413s 00:08:00.980 sys 0m4.991s 00:08:00.980 15:31:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.980 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:08:01.238 15:31:02 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.238 15:31:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:01.238 15:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.238 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:08:01.238 ************************************ 00:08:01.238 START TEST nvmf_lvs_grow 00:08:01.238 ************************************ 00:08:01.238 15:31:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.238 * Looking for test storage... 
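Stripped of the harness, the lvol lifecycle that the nvmf_lvol suite above drove is a short rpc.py sequence over the raid0 base (a sketch; the commands and UUIDs are the ones reported in the trace, the $rpc shorthand is only for readability):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs                          # lvs 6ba4d979-b13b-4cb4-968b-9cda604d243e
  $rpc bdev_lvol_create -u 6ba4d979-b13b-4cb4-968b-9cda604d243e lvol 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 421d2d2d-620e-4ea2-bc7d-20a4f589abc8
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_lvol_snapshot 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 MY_SNAPSHOT
  $rpc bdev_lvol_resize 421d2d2d-620e-4ea2-bc7d-20a4f589abc8 30
  $rpc bdev_lvol_clone 066ddc68-b4a2-426b-aff5-52649408d798 MY_CLONE
  $rpc bdev_lvol_inflate 705902ce-1691-4322-b7db-6f42a9ebb37c
  $rpc bdev_lvol_delete 421d2d2d-620e-4ea2-bc7d-20a4f589abc8
  $rpc bdev_lvol_delete_lvstore -u 6ba4d979-b13b-4cb4-968b-9cda604d243e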
00:08:01.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:01.238 15:31:02 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.238 15:31:02 -- nvmf/common.sh@7 -- # uname -s 00:08:01.238 15:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.238 15:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.238 15:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.238 15:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.238 15:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.238 15:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.238 15:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.238 15:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.238 15:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.238 15:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.238 15:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:01.238 15:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:01.238 15:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.238 15:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.238 15:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:01.238 15:31:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.238 15:31:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.238 15:31:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.238 15:31:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.238 15:31:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.238 15:31:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.238 15:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.238 15:31:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.238 15:31:02 -- paths/export.sh@5 -- # export PATH 00:08:01.238 15:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.238 15:31:02 -- nvmf/common.sh@47 -- # : 0 00:08:01.238 15:31:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.239 15:31:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.239 15:31:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.239 15:31:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.239 15:31:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.239 15:31:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.239 15:31:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.239 15:31:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.239 15:31:02 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.239 15:31:02 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:01.239 15:31:02 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:01.239 15:31:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:01.239 15:31:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.239 15:31:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:01.239 15:31:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:01.239 15:31:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:01.239 15:31:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.239 15:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.239 15:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.239 15:31:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:01.239 15:31:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:01.239 15:31:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:01.239 15:31:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:01.239 15:31:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:01.239 15:31:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:01.239 15:31:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.239 15:31:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.239 15:31:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:01.239 15:31:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:01.239 15:31:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:01.239 15:31:02 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:01.239 15:31:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:01.239 15:31:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.239 15:31:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:01.239 15:31:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:01.239 15:31:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:01.239 15:31:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:01.239 15:31:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:01.239 15:31:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:01.497 Cannot find device "nvmf_tgt_br" 00:08:01.497 15:31:02 -- nvmf/common.sh@155 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.497 Cannot find device "nvmf_tgt_br2" 00:08:01.497 15:31:02 -- nvmf/common.sh@156 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:01.497 15:31:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:01.497 Cannot find device "nvmf_tgt_br" 00:08:01.497 15:31:02 -- nvmf/common.sh@158 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:01.497 Cannot find device "nvmf_tgt_br2" 00:08:01.497 15:31:02 -- nvmf/common.sh@159 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:01.497 15:31:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:01.497 15:31:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.497 15:31:02 -- nvmf/common.sh@162 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.497 15:31:02 -- nvmf/common.sh@163 -- # true 00:08:01.497 15:31:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:01.497 15:31:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:01.497 15:31:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:01.497 15:31:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:01.497 15:31:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:01.497 15:31:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:01.497 15:31:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:01.497 15:31:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:01.497 15:31:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:01.497 15:31:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:01.497 15:31:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:01.497 15:31:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:01.497 15:31:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:01.497 15:31:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:01.497 15:31:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
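nvmf_veth_init, traced above, builds an isolated topology for the TCP transport: a network namespace (nvmf_tgt_ns_spdk) that will hold the target-side interfaces, veth pairs connecting it back to the host, and the 10.0.0.1/2/3 addresses split between initiator and target sides. Condensed from the commands in the trace, the setup is roughly the following sketch (run as root); the lines that follow in the log then attach the host-side ends to the nvmf_br bridge and ping-test the links:

    # condensed from the nvmf_veth_init trace above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up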
00:08:01.497 15:31:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:01.497 15:31:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:01.497 15:31:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:01.497 15:31:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:01.754 15:31:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:01.754 15:31:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:01.754 15:31:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:01.754 15:31:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:01.754 15:31:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:01.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:08:01.754 00:08:01.754 --- 10.0.0.2 ping statistics --- 00:08:01.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.754 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:01.754 15:31:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:01.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:01.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:01.754 00:08:01.754 --- 10.0.0.3 ping statistics --- 00:08:01.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.754 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:01.754 15:31:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:01.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:01.754 00:08:01.754 --- 10.0.0.1 ping statistics --- 00:08:01.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.754 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:01.754 15:31:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.754 15:31:02 -- nvmf/common.sh@422 -- # return 0 00:08:01.754 15:31:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:01.754 15:31:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.754 15:31:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:01.754 15:31:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:01.754 15:31:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.755 15:31:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:01.755 15:31:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:01.755 15:31:03 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:01.755 15:31:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:01.755 15:31:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:01.755 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:01.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.755 15:31:03 -- nvmf/common.sh@470 -- # nvmfpid=65835 00:08:01.755 15:31:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:01.755 15:31:03 -- nvmf/common.sh@471 -- # waitforlisten 65835 00:08:01.755 15:31:03 -- common/autotest_common.sh@817 -- # '[' -z 65835 ']' 00:08:01.755 15:31:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.755 15:31:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:01.755 15:31:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.755 15:31:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:01.755 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:01.755 [2024-04-17 15:31:03.084493] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:01.755 [2024-04-17 15:31:03.084622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.012 [2024-04-17 15:31:03.223589] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.012 [2024-04-17 15:31:03.363292] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.012 [2024-04-17 15:31:03.363357] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.012 [2024-04-17 15:31:03.363384] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.012 [2024-04-17 15:31:03.363392] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.012 [2024-04-17 15:31:03.363399] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
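nvmfappstart, traced above, launches the target inside the namespace so that its TCP listeners bind to the namespaced 10.0.0.x addresses, while rpc.py keeps talking to it from the host over the UNIX socket /var/tmp/spdk.sock (UNIX sockets are filesystem objects, so the network namespace does not hide them). A condensed sketch of the same startup, using only commands that appear in the trace (the harness's waitforlisten polling is elided):

    # run as root from the SPDK repo root
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # the harness polls /var/tmp/spdk.sock via waitforlisten before issuing RPCs, then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192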
00:08:02.012 [2024-04-17 15:31:03.363427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.957 15:31:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:02.957 15:31:04 -- common/autotest_common.sh@850 -- # return 0 00:08:02.957 15:31:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:02.957 15:31:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:02.957 15:31:04 -- common/autotest_common.sh@10 -- # set +x 00:08:02.957 15:31:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.957 15:31:04 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.957 [2024-04-17 15:31:04.313256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.957 15:31:04 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:02.957 15:31:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.957 15:31:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.957 15:31:04 -- common/autotest_common.sh@10 -- # set +x 00:08:03.216 ************************************ 00:08:03.216 START TEST lvs_grow_clean 00:08:03.216 ************************************ 00:08:03.216 15:31:04 -- common/autotest_common.sh@1111 -- # lvs_grow 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.216 15:31:04 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.474 15:31:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.474 15:31:04 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:03.732 15:31:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:03.732 15:31:04 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:03.732 15:31:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.989 15:31:05 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.989 15:31:05 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.989 15:31:05 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e lvol 150 00:08:04.256 15:31:05 -- target/nvmf_lvs_grow.sh@33 -- # lvol=76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 00:08:04.256 15:31:05 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.256 15:31:05 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.531 [2024-04-17 15:31:05.748629] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.531 [2024-04-17 15:31:05.748762] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.531 true 00:08:04.531 15:31:05 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:04.531 15:31:05 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.788 15:31:06 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.788 15:31:06 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.046 15:31:06 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 00:08:05.304 15:31:06 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.304 [2024-04-17 15:31:06.685222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.304 15:31:06 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.562 15:31:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65921 00:08:05.562 15:31:06 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.562 15:31:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.562 15:31:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65921 /var/tmp/bdevperf.sock 00:08:05.562 15:31:06 -- common/autotest_common.sh@817 -- # '[' -z 65921 ']' 00:08:05.562 15:31:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.562 15:31:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:05.562 15:31:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.562 15:31:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:05.562 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.562 [2024-04-17 15:31:06.969203] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
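At this point the lvol exists on the 200M AIO-backed lvstore; the trace above exports it over NVMe/TCP and starts a separate bdevperf process with its own RPC socket. The attach step is what turns the remote namespace into the local Nvme0n1 bdev whose JSON description follows below. Condensed from the RPCs in the trace (the lvol UUID placeholder stands for the value returned by bdev_lvol_create, 76fd65e3-... in this run):

    # target-side RPCs (default socket /var/tmp/spdk.sock)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol uuid>
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: attach through bdevperf's own RPC socket
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0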
00:08:05.562 [2024-04-17 15:31:06.969350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65921 ] 00:08:05.820 [2024-04-17 15:31:07.108838] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.079 [2024-04-17 15:31:07.267551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.645 15:31:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:06.645 15:31:07 -- common/autotest_common.sh@850 -- # return 0 00:08:06.645 15:31:07 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:06.904 Nvme0n1 00:08:06.904 15:31:08 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:07.161 [ 00:08:07.161 { 00:08:07.161 "name": "Nvme0n1", 00:08:07.161 "aliases": [ 00:08:07.161 "76fd65e3-fbc8-4808-bf1b-0c4d4b84b946" 00:08:07.161 ], 00:08:07.161 "product_name": "NVMe disk", 00:08:07.161 "block_size": 4096, 00:08:07.161 "num_blocks": 38912, 00:08:07.161 "uuid": "76fd65e3-fbc8-4808-bf1b-0c4d4b84b946", 00:08:07.161 "assigned_rate_limits": { 00:08:07.161 "rw_ios_per_sec": 0, 00:08:07.161 "rw_mbytes_per_sec": 0, 00:08:07.161 "r_mbytes_per_sec": 0, 00:08:07.161 "w_mbytes_per_sec": 0 00:08:07.161 }, 00:08:07.161 "claimed": false, 00:08:07.161 "zoned": false, 00:08:07.161 "supported_io_types": { 00:08:07.161 "read": true, 00:08:07.161 "write": true, 00:08:07.161 "unmap": true, 00:08:07.161 "write_zeroes": true, 00:08:07.161 "flush": true, 00:08:07.161 "reset": true, 00:08:07.161 "compare": true, 00:08:07.161 "compare_and_write": true, 00:08:07.161 "abort": true, 00:08:07.161 "nvme_admin": true, 00:08:07.161 "nvme_io": true 00:08:07.161 }, 00:08:07.161 "memory_domains": [ 00:08:07.161 { 00:08:07.161 "dma_device_id": "system", 00:08:07.161 "dma_device_type": 1 00:08:07.161 } 00:08:07.161 ], 00:08:07.161 "driver_specific": { 00:08:07.161 "nvme": [ 00:08:07.161 { 00:08:07.161 "trid": { 00:08:07.161 "trtype": "TCP", 00:08:07.161 "adrfam": "IPv4", 00:08:07.161 "traddr": "10.0.0.2", 00:08:07.161 "trsvcid": "4420", 00:08:07.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:07.161 }, 00:08:07.162 "ctrlr_data": { 00:08:07.162 "cntlid": 1, 00:08:07.162 "vendor_id": "0x8086", 00:08:07.162 "model_number": "SPDK bdev Controller", 00:08:07.162 "serial_number": "SPDK0", 00:08:07.162 "firmware_revision": "24.05", 00:08:07.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.162 "oacs": { 00:08:07.162 "security": 0, 00:08:07.162 "format": 0, 00:08:07.162 "firmware": 0, 00:08:07.162 "ns_manage": 0 00:08:07.162 }, 00:08:07.162 "multi_ctrlr": true, 00:08:07.162 "ana_reporting": false 00:08:07.162 }, 00:08:07.162 "vs": { 00:08:07.162 "nvme_version": "1.3" 00:08:07.162 }, 00:08:07.162 "ns_data": { 00:08:07.162 "id": 1, 00:08:07.162 "can_share": true 00:08:07.162 } 00:08:07.162 } 00:08:07.162 ], 00:08:07.162 "mp_policy": "active_passive" 00:08:07.162 } 00:08:07.162 } 00:08:07.162 ] 00:08:07.162 15:31:08 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65945 00:08:07.162 15:31:08 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.162 15:31:08 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:08:07.162 Running I/O for 10 seconds... 00:08:08.095 Latency(us) 00:08:08.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.095 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:08.095 =================================================================================================================== 00:08:08.095 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:08.095 00:08:09.030 15:31:10 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:09.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.288 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:09.288 =================================================================================================================== 00:08:09.288 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:09.288 00:08:09.288 true 00:08:09.545 15:31:10 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:09.545 15:31:10 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:09.803 15:31:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:09.803 15:31:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:09.803 15:31:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 65945 00:08:10.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.369 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:08:10.369 =================================================================================================================== 00:08:10.369 Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:08:10.369 00:08:11.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.302 Nvme0n1 : 4.00 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:08:11.302 =================================================================================================================== 00:08:11.302 Total : 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:08:11.302 00:08:12.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.235 Nvme0n1 : 5.00 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:08:12.235 =================================================================================================================== 00:08:12.235 Total : 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:08:12.235 00:08:13.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.167 Nvme0n1 : 6.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:13.167 =================================================================================================================== 00:08:13.167 Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:13.167 00:08:14.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.118 Nvme0n1 : 7.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:14.118 =================================================================================================================== 00:08:14.118 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:14.118 00:08:15.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.494 Nvme0n1 : 8.00 7135.88 27.87 0.00 0.00 0.00 0.00 0.00 00:08:15.494 
=================================================================================================================== 00:08:15.494 Total : 7135.88 27.87 0.00 0.00 0.00 0.00 0.00 00:08:15.494 00:08:16.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.431 Nvme0n1 : 9.00 7133.22 27.86 0.00 0.00 0.00 0.00 0.00 00:08:16.431 =================================================================================================================== 00:08:16.431 Total : 7133.22 27.86 0.00 0.00 0.00 0.00 0.00 00:08:16.431 00:08:17.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.367 Nvme0n1 : 10.00 7105.70 27.76 0.00 0.00 0.00 0.00 0.00 00:08:17.367 =================================================================================================================== 00:08:17.367 Total : 7105.70 27.76 0.00 0.00 0.00 0.00 0.00 00:08:17.367 00:08:17.367 00:08:17.367 Latency(us) 00:08:17.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.367 Nvme0n1 : 10.01 7107.92 27.77 0.00 0.00 18003.92 9234.62 40513.16 00:08:17.367 =================================================================================================================== 00:08:17.367 Total : 7107.92 27.77 0.00 0.00 18003.92 9234.62 40513.16 00:08:17.367 0 00:08:17.367 15:31:18 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65921 00:08:17.367 15:31:18 -- common/autotest_common.sh@936 -- # '[' -z 65921 ']' 00:08:17.367 15:31:18 -- common/autotest_common.sh@940 -- # kill -0 65921 00:08:17.367 15:31:18 -- common/autotest_common.sh@941 -- # uname 00:08:17.367 15:31:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.367 15:31:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65921 00:08:17.367 15:31:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:17.367 killing process with pid 65921 00:08:17.367 15:31:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:17.367 15:31:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65921' 00:08:17.367 Received shutdown signal, test time was about 10.000000 seconds 00:08:17.367 00:08:17.367 Latency(us) 00:08:17.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.367 =================================================================================================================== 00:08:17.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:17.367 15:31:18 -- common/autotest_common.sh@955 -- # kill 65921 00:08:17.367 15:31:18 -- common/autotest_common.sh@960 -- # wait 65921 00:08:17.625 15:31:18 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.884 15:31:19 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:17.884 15:31:19 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:18.143 15:31:19 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:18.143 15:31:19 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:18.143 15:31:19 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.402 [2024-04-17 15:31:19.623279] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.402 
15:31:19 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:18.402 15:31:19 -- common/autotest_common.sh@638 -- # local es=0 00:08:18.402 15:31:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:18.402 15:31:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.402 15:31:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.402 15:31:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.402 15:31:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.402 15:31:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.402 15:31:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.402 15:31:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.402 15:31:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.402 15:31:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:18.661 request: 00:08:18.661 { 00:08:18.661 "uuid": "c4f75df9-42cf-4a28-92e7-35361c2c4a8e", 00:08:18.661 "method": "bdev_lvol_get_lvstores", 00:08:18.661 "req_id": 1 00:08:18.661 } 00:08:18.661 Got JSON-RPC error response 00:08:18.661 response: 00:08:18.661 { 00:08:18.661 "code": -19, 00:08:18.661 "message": "No such device" 00:08:18.661 } 00:08:18.661 15:31:19 -- common/autotest_common.sh@641 -- # es=1 00:08:18.661 15:31:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:18.661 15:31:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:18.661 15:31:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:18.661 15:31:19 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.661 aio_bdev 00:08:18.921 15:31:20 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 00:08:18.921 15:31:20 -- common/autotest_common.sh@885 -- # local bdev_name=76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 00:08:18.921 15:31:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:18.921 15:31:20 -- common/autotest_common.sh@887 -- # local i 00:08:18.921 15:31:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:18.921 15:31:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:18.921 15:31:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.921 15:31:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 -t 2000 00:08:19.180 [ 00:08:19.180 { 00:08:19.180 "name": "76fd65e3-fbc8-4808-bf1b-0c4d4b84b946", 00:08:19.180 "aliases": [ 00:08:19.180 "lvs/lvol" 00:08:19.180 ], 00:08:19.180 "product_name": "Logical Volume", 00:08:19.180 "block_size": 4096, 00:08:19.180 "num_blocks": 38912, 00:08:19.180 "uuid": "76fd65e3-fbc8-4808-bf1b-0c4d4b84b946", 00:08:19.180 "assigned_rate_limits": { 00:08:19.180 "rw_ios_per_sec": 0, 00:08:19.180 "rw_mbytes_per_sec": 0, 00:08:19.180 "r_mbytes_per_sec": 0, 00:08:19.180 
"w_mbytes_per_sec": 0 00:08:19.180 }, 00:08:19.180 "claimed": false, 00:08:19.180 "zoned": false, 00:08:19.180 "supported_io_types": { 00:08:19.180 "read": true, 00:08:19.180 "write": true, 00:08:19.180 "unmap": true, 00:08:19.180 "write_zeroes": true, 00:08:19.180 "flush": false, 00:08:19.180 "reset": true, 00:08:19.180 "compare": false, 00:08:19.180 "compare_and_write": false, 00:08:19.180 "abort": false, 00:08:19.180 "nvme_admin": false, 00:08:19.180 "nvme_io": false 00:08:19.180 }, 00:08:19.180 "driver_specific": { 00:08:19.180 "lvol": { 00:08:19.180 "lvol_store_uuid": "c4f75df9-42cf-4a28-92e7-35361c2c4a8e", 00:08:19.180 "base_bdev": "aio_bdev", 00:08:19.180 "thin_provision": false, 00:08:19.180 "snapshot": false, 00:08:19.180 "clone": false, 00:08:19.180 "esnap_clone": false 00:08:19.180 } 00:08:19.180 } 00:08:19.180 } 00:08:19.180 ] 00:08:19.180 15:31:20 -- common/autotest_common.sh@893 -- # return 0 00:08:19.180 15:31:20 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:19.180 15:31:20 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:19.439 15:31:20 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:19.439 15:31:20 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:19.439 15:31:20 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:19.698 15:31:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:19.698 15:31:21 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 76fd65e3-fbc8-4808-bf1b-0c4d4b84b946 00:08:19.956 15:31:21 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4f75df9-42cf-4a28-92e7-35361c2c4a8e 00:08:20.215 15:31:21 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.474 15:31:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:20.742 00:08:20.742 real 0m17.690s 00:08:20.742 user 0m16.484s 00:08:20.742 sys 0m2.598s 00:08:20.742 15:31:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:20.742 15:31:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 ************************************ 00:08:20.742 END TEST lvs_grow_clean 00:08:20.742 ************************************ 00:08:20.742 15:31:22 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:20.742 15:31:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.742 15:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.742 15:31:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.017 ************************************ 00:08:21.017 START TEST lvs_grow_dirty 00:08:21.017 ************************************ 00:08:21.017 15:31:22 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:21.017 15:31:22 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.276 15:31:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:21.276 15:31:22 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:21.535 15:31:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:21.535 15:31:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:21.535 15:31:22 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:21.794 15:31:23 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:21.794 15:31:23 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:21.794 15:31:23 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e2439c4a-a5bc-43e5-9c83-456bb162f352 lvol 150 00:08:22.053 15:31:23 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:22.053 15:31:23 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:22.053 15:31:23 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:22.311 [2024-04-17 15:31:23.576477] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:22.311 [2024-04-17 15:31:23.576607] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:22.311 true 00:08:22.311 15:31:23 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:22.311 15:31:23 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:22.570 15:31:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:22.570 15:31:23 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.829 15:31:24 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:23.099 15:31:24 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.365 15:31:24 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.365 15:31:24 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:23.365 15:31:24 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66189 00:08:23.365 15:31:24 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
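The grow check itself, already exercised by the clean variant above and repeated while I/O is running in this dirty variant, is a short sequence: enlarge the backing AIO file, tell SPDK to rescan it, grow the lvstore, and confirm the cluster count doubles. A condensed sketch using the commands and values from the trace (the lvstore UUID placeholder stands for the value printed by bdev_lvol_create_lvstore):

    truncate -s 400M test/nvmf/target/aio_bdev        # backing file was created at 200M
    ./scripts/rpc.py bdev_aio_rescan aio_bdev          # new size visible, still 49 data clusters
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u <lvstore uuid>
    ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvstore uuid> | jq -r '.[0].total_data_clusters'   # now 99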
00:08:23.365 15:31:24 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66189 /var/tmp/bdevperf.sock 00:08:23.365 15:31:24 -- common/autotest_common.sh@817 -- # '[' -z 66189 ']' 00:08:23.365 15:31:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.365 15:31:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.365 15:31:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.365 15:31:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.365 15:31:24 -- common/autotest_common.sh@10 -- # set +x 00:08:23.625 [2024-04-17 15:31:24.834900] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:23.625 [2024-04-17 15:31:24.835021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66189 ] 00:08:23.625 [2024-04-17 15:31:24.973110] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.884 [2024-04-17 15:31:25.122472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.451 15:31:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:24.451 15:31:25 -- common/autotest_common.sh@850 -- # return 0 00:08:24.451 15:31:25 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.710 Nvme0n1 00:08:24.710 15:31:26 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:24.970 [ 00:08:24.970 { 00:08:24.970 "name": "Nvme0n1", 00:08:24.970 "aliases": [ 00:08:24.970 "b92206ba-5f5d-4198-9f78-5b5d9a91c96a" 00:08:24.970 ], 00:08:24.970 "product_name": "NVMe disk", 00:08:24.970 "block_size": 4096, 00:08:24.970 "num_blocks": 38912, 00:08:24.970 "uuid": "b92206ba-5f5d-4198-9f78-5b5d9a91c96a", 00:08:24.970 "assigned_rate_limits": { 00:08:24.970 "rw_ios_per_sec": 0, 00:08:24.970 "rw_mbytes_per_sec": 0, 00:08:24.970 "r_mbytes_per_sec": 0, 00:08:24.970 "w_mbytes_per_sec": 0 00:08:24.970 }, 00:08:24.970 "claimed": false, 00:08:24.970 "zoned": false, 00:08:24.970 "supported_io_types": { 00:08:24.970 "read": true, 00:08:24.970 "write": true, 00:08:24.970 "unmap": true, 00:08:24.970 "write_zeroes": true, 00:08:24.970 "flush": true, 00:08:24.970 "reset": true, 00:08:24.970 "compare": true, 00:08:24.970 "compare_and_write": true, 00:08:24.970 "abort": true, 00:08:24.970 "nvme_admin": true, 00:08:24.970 "nvme_io": true 00:08:24.970 }, 00:08:24.970 "memory_domains": [ 00:08:24.970 { 00:08:24.970 "dma_device_id": "system", 00:08:24.970 "dma_device_type": 1 00:08:24.970 } 00:08:24.970 ], 00:08:24.970 "driver_specific": { 00:08:24.970 "nvme": [ 00:08:24.970 { 00:08:24.970 "trid": { 00:08:24.970 "trtype": "TCP", 00:08:24.970 "adrfam": "IPv4", 00:08:24.970 "traddr": "10.0.0.2", 00:08:24.970 "trsvcid": "4420", 00:08:24.970 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:24.970 }, 00:08:24.970 "ctrlr_data": { 00:08:24.970 "cntlid": 1, 00:08:24.970 "vendor_id": "0x8086", 00:08:24.970 "model_number": "SPDK bdev Controller", 00:08:24.970 "serial_number": "SPDK0", 00:08:24.970 
"firmware_revision": "24.05", 00:08:24.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.970 "oacs": { 00:08:24.970 "security": 0, 00:08:24.970 "format": 0, 00:08:24.970 "firmware": 0, 00:08:24.970 "ns_manage": 0 00:08:24.970 }, 00:08:24.970 "multi_ctrlr": true, 00:08:24.970 "ana_reporting": false 00:08:24.970 }, 00:08:24.970 "vs": { 00:08:24.970 "nvme_version": "1.3" 00:08:24.970 }, 00:08:24.970 "ns_data": { 00:08:24.970 "id": 1, 00:08:24.970 "can_share": true 00:08:24.970 } 00:08:24.970 } 00:08:24.970 ], 00:08:24.970 "mp_policy": "active_passive" 00:08:24.970 } 00:08:24.970 } 00:08:24.970 ] 00:08:24.970 15:31:26 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66212 00:08:24.970 15:31:26 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.970 15:31:26 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:24.970 Running I/O for 10 seconds... 00:08:26.347 Latency(us) 00:08:26.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.347 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:26.347 =================================================================================================================== 00:08:26.347 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:26.347 00:08:26.915 15:31:28 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:27.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.175 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:27.175 =================================================================================================================== 00:08:27.175 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:27.175 00:08:27.175 true 00:08:27.175 15:31:28 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:27.175 15:31:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.743 15:31:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.743 15:31:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.743 15:31:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 66212 00:08:28.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.022 Nvme0n1 : 3.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:08:28.022 =================================================================================================================== 00:08:28.022 Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:08:28.022 00:08:28.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.958 Nvme0n1 : 4.00 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:28.958 =================================================================================================================== 00:08:28.958 Total : 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:28.958 00:08:30.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.335 Nvme0n1 : 5.00 7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:08:30.335 =================================================================================================================== 00:08:30.335 Total : 7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:08:30.335 00:08:31.271 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:08:31.271 Nvme0n1 : 6.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:31.271 =================================================================================================================== 00:08:31.271 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:31.271 00:08:32.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.214 Nvme0n1 : 7.00 6976.14 27.25 0.00 0.00 0.00 0.00 0.00 00:08:32.214 =================================================================================================================== 00:08:32.214 Total : 6976.14 27.25 0.00 0.00 0.00 0.00 0.00 00:08:32.214 00:08:33.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.149 Nvme0n1 : 8.00 6850.12 26.76 0.00 0.00 0.00 0.00 0.00 00:08:33.149 =================================================================================================================== 00:08:33.149 Total : 6850.12 26.76 0.00 0.00 0.00 0.00 0.00 00:08:33.149 00:08:34.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.085 Nvme0n1 : 9.00 6836.89 26.71 0.00 0.00 0.00 0.00 0.00 00:08:34.085 =================================================================================================================== 00:08:34.085 Total : 6836.89 26.71 0.00 0.00 0.00 0.00 0.00 00:08:34.085 00:08:35.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.022 Nvme0n1 : 10.00 6839.00 26.71 0.00 0.00 0.00 0.00 0.00 00:08:35.022 =================================================================================================================== 00:08:35.022 Total : 6839.00 26.71 0.00 0.00 0.00 0.00 0.00 00:08:35.022 00:08:35.022 00:08:35.022 Latency(us) 00:08:35.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.022 Nvme0n1 : 10.02 6839.91 26.72 0.00 0.00 18707.86 12749.73 148707.14 00:08:35.022 =================================================================================================================== 00:08:35.022 Total : 6839.91 26.72 0.00 0.00 18707.86 12749.73 148707.14 00:08:35.022 0 00:08:35.022 15:31:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66189 00:08:35.022 15:31:36 -- common/autotest_common.sh@936 -- # '[' -z 66189 ']' 00:08:35.022 15:31:36 -- common/autotest_common.sh@940 -- # kill -0 66189 00:08:35.022 15:31:36 -- common/autotest_common.sh@941 -- # uname 00:08:35.022 15:31:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.022 15:31:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66189 00:08:35.022 15:31:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:35.022 15:31:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:35.022 killing process with pid 66189 00:08:35.022 15:31:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66189' 00:08:35.022 15:31:36 -- common/autotest_common.sh@955 -- # kill 66189 00:08:35.022 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.022 00:08:35.022 Latency(us) 00:08:35.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.022 =================================================================================================================== 00:08:35.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.022 15:31:36 -- common/autotest_common.sh@960 -- # 
wait 66189 00:08:35.597 15:31:36 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.856 15:31:37 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:35.856 15:31:37 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:36.114 15:31:37 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:36.114 15:31:37 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:36.114 15:31:37 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 65835 00:08:36.115 15:31:37 -- target/nvmf_lvs_grow.sh@74 -- # wait 65835 00:08:36.115 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 65835 Killed "${NVMF_APP[@]}" "$@" 00:08:36.115 15:31:37 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:36.115 15:31:37 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:36.115 15:31:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:36.115 15:31:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:36.115 15:31:37 -- common/autotest_common.sh@10 -- # set +x 00:08:36.115 15:31:37 -- nvmf/common.sh@470 -- # nvmfpid=66344 00:08:36.115 15:31:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:36.115 15:31:37 -- nvmf/common.sh@471 -- # waitforlisten 66344 00:08:36.115 15:31:37 -- common/autotest_common.sh@817 -- # '[' -z 66344 ']' 00:08:36.115 15:31:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.115 15:31:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.115 15:31:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.115 15:31:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:36.115 15:31:37 -- common/autotest_common.sh@10 -- # set +x 00:08:36.115 [2024-04-17 15:31:37.396938] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:36.115 [2024-04-17 15:31:37.397027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.115 [2024-04-17 15:31:37.528798] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.373 [2024-04-17 15:31:37.670570] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.373 [2024-04-17 15:31:37.670634] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.373 [2024-04-17 15:31:37.670661] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.373 [2024-04-17 15:31:37.670670] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.373 [2024-04-17 15:31:37.670677] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
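The distinguishing step of the dirty variant appears in the trace above: the first target is killed with SIGKILL while the lvstore is still open, so its metadata is never flushed cleanly. When the backing file is re-registered under the freshly started target below, blobstore recovery replays the metadata ("Performing recovery on blobstore") and the test then verifies that the cluster accounting survived the crash, 61 free out of 99 total. Condensed from the trace (UUID placeholder as before):

    kill -9 "$nvmfpid"                                  # dirty shutdown of the first target
    # ... start a fresh nvmf_tgt in the namespace as above, then:
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    # examine of aio_bdev triggers blobstore recovery; counts must match the pre-kill state
    ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvstore uuid> | jq -r '.[0].free_clusters'          # 61
    ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvstore uuid> | jq -r '.[0].total_data_clusters'    # 99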
00:08:36.373 [2024-04-17 15:31:37.670710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.941 15:31:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:36.941 15:31:38 -- common/autotest_common.sh@850 -- # return 0 00:08:36.941 15:31:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:36.941 15:31:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:36.941 15:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:37.200 15:31:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.200 15:31:38 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.200 [2024-04-17 15:31:38.578987] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:37.200 [2024-04-17 15:31:38.579223] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:37.200 [2024-04-17 15:31:38.579527] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:37.200 15:31:38 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:37.200 15:31:38 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:37.200 15:31:38 -- common/autotest_common.sh@885 -- # local bdev_name=b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:37.200 15:31:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:37.200 15:31:38 -- common/autotest_common.sh@887 -- # local i 00:08:37.200 15:31:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:37.200 15:31:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:37.200 15:31:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.458 15:31:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b92206ba-5f5d-4198-9f78-5b5d9a91c96a -t 2000 00:08:37.717 [ 00:08:37.717 { 00:08:37.717 "name": "b92206ba-5f5d-4198-9f78-5b5d9a91c96a", 00:08:37.717 "aliases": [ 00:08:37.717 "lvs/lvol" 00:08:37.717 ], 00:08:37.717 "product_name": "Logical Volume", 00:08:37.717 "block_size": 4096, 00:08:37.717 "num_blocks": 38912, 00:08:37.717 "uuid": "b92206ba-5f5d-4198-9f78-5b5d9a91c96a", 00:08:37.718 "assigned_rate_limits": { 00:08:37.718 "rw_ios_per_sec": 0, 00:08:37.718 "rw_mbytes_per_sec": 0, 00:08:37.718 "r_mbytes_per_sec": 0, 00:08:37.718 "w_mbytes_per_sec": 0 00:08:37.718 }, 00:08:37.718 "claimed": false, 00:08:37.718 "zoned": false, 00:08:37.718 "supported_io_types": { 00:08:37.718 "read": true, 00:08:37.718 "write": true, 00:08:37.718 "unmap": true, 00:08:37.718 "write_zeroes": true, 00:08:37.718 "flush": false, 00:08:37.718 "reset": true, 00:08:37.718 "compare": false, 00:08:37.718 "compare_and_write": false, 00:08:37.718 "abort": false, 00:08:37.718 "nvme_admin": false, 00:08:37.718 "nvme_io": false 00:08:37.718 }, 00:08:37.718 "driver_specific": { 00:08:37.718 "lvol": { 00:08:37.718 "lvol_store_uuid": "e2439c4a-a5bc-43e5-9c83-456bb162f352", 00:08:37.718 "base_bdev": "aio_bdev", 00:08:37.718 "thin_provision": false, 00:08:37.718 "snapshot": false, 00:08:37.718 "clone": false, 00:08:37.718 "esnap_clone": false 00:08:37.718 } 00:08:37.718 } 00:08:37.718 } 00:08:37.718 ] 00:08:37.718 15:31:39 -- common/autotest_common.sh@893 -- # return 0 00:08:37.718 15:31:39 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:37.718 15:31:39 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:37.976 15:31:39 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:37.976 15:31:39 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:37.976 15:31:39 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:38.235 15:31:39 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:38.235 15:31:39 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.494 [2024-04-17 15:31:39.684201] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:38.494 15:31:39 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:38.494 15:31:39 -- common/autotest_common.sh@638 -- # local es=0 00:08:38.494 15:31:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:38.494 15:31:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.494 15:31:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.494 15:31:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.494 15:31:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.494 15:31:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.494 15:31:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.494 15:31:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.495 15:31:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:38.495 15:31:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:38.753 request: 00:08:38.754 { 00:08:38.754 "uuid": "e2439c4a-a5bc-43e5-9c83-456bb162f352", 00:08:38.754 "method": "bdev_lvol_get_lvstores", 00:08:38.754 "req_id": 1 00:08:38.754 } 00:08:38.754 Got JSON-RPC error response 00:08:38.754 response: 00:08:38.754 { 00:08:38.754 "code": -19, 00:08:38.754 "message": "No such device" 00:08:38.754 } 00:08:38.754 15:31:39 -- common/autotest_common.sh@641 -- # es=1 00:08:38.754 15:31:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:38.754 15:31:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:38.754 15:31:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:38.754 15:31:39 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.013 aio_bdev 00:08:39.013 15:31:40 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:39.013 15:31:40 -- common/autotest_common.sh@885 -- # local bdev_name=b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:39.013 15:31:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:39.013 15:31:40 -- common/autotest_common.sh@887 -- # local i 00:08:39.013 15:31:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:39.013 15:31:40 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:39.013 15:31:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.272 15:31:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b92206ba-5f5d-4198-9f78-5b5d9a91c96a -t 2000 00:08:39.272 [ 00:08:39.272 { 00:08:39.272 "name": "b92206ba-5f5d-4198-9f78-5b5d9a91c96a", 00:08:39.272 "aliases": [ 00:08:39.272 "lvs/lvol" 00:08:39.272 ], 00:08:39.272 "product_name": "Logical Volume", 00:08:39.272 "block_size": 4096, 00:08:39.272 "num_blocks": 38912, 00:08:39.272 "uuid": "b92206ba-5f5d-4198-9f78-5b5d9a91c96a", 00:08:39.272 "assigned_rate_limits": { 00:08:39.272 "rw_ios_per_sec": 0, 00:08:39.272 "rw_mbytes_per_sec": 0, 00:08:39.272 "r_mbytes_per_sec": 0, 00:08:39.272 "w_mbytes_per_sec": 0 00:08:39.272 }, 00:08:39.272 "claimed": false, 00:08:39.272 "zoned": false, 00:08:39.272 "supported_io_types": { 00:08:39.272 "read": true, 00:08:39.272 "write": true, 00:08:39.272 "unmap": true, 00:08:39.272 "write_zeroes": true, 00:08:39.272 "flush": false, 00:08:39.272 "reset": true, 00:08:39.272 "compare": false, 00:08:39.272 "compare_and_write": false, 00:08:39.272 "abort": false, 00:08:39.272 "nvme_admin": false, 00:08:39.272 "nvme_io": false 00:08:39.272 }, 00:08:39.272 "driver_specific": { 00:08:39.272 "lvol": { 00:08:39.272 "lvol_store_uuid": "e2439c4a-a5bc-43e5-9c83-456bb162f352", 00:08:39.272 "base_bdev": "aio_bdev", 00:08:39.272 "thin_provision": false, 00:08:39.272 "snapshot": false, 00:08:39.272 "clone": false, 00:08:39.272 "esnap_clone": false 00:08:39.272 } 00:08:39.272 } 00:08:39.272 } 00:08:39.272 ] 00:08:39.272 15:31:40 -- common/autotest_common.sh@893 -- # return 0 00:08:39.272 15:31:40 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:39.272 15:31:40 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:39.531 15:31:40 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:39.531 15:31:40 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:39.531 15:31:40 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:39.790 15:31:41 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:39.790 15:31:41 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b92206ba-5f5d-4198-9f78-5b5d9a91c96a 00:08:40.048 15:31:41 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2439c4a-a5bc-43e5-9c83-456bb162f352 00:08:40.307 15:31:41 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.565 15:31:41 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.823 00:08:40.823 real 0m20.044s 00:08:40.823 user 0m41.584s 00:08:40.823 sys 0m8.764s 00:08:40.823 15:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.823 15:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:40.823 ************************************ 00:08:40.823 END TEST lvs_grow_dirty 00:08:40.823 ************************************ 00:08:41.082 15:31:42 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:41.082 15:31:42 -- common/autotest_common.sh@794 -- # type=--id 00:08:41.082 15:31:42 -- 
common/autotest_common.sh@795 -- # id=0 00:08:41.082 15:31:42 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:08:41.082 15:31:42 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:41.082 15:31:42 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:08:41.082 15:31:42 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:08:41.082 15:31:42 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:08:41.082 15:31:42 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:41.082 nvmf_trace.0 00:08:41.082 15:31:42 -- common/autotest_common.sh@809 -- # return 0 00:08:41.082 15:31:42 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:41.082 15:31:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:41.082 15:31:42 -- nvmf/common.sh@117 -- # sync 00:08:41.341 15:31:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.341 15:31:42 -- nvmf/common.sh@120 -- # set +e 00:08:41.341 15:31:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.341 15:31:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.341 rmmod nvme_tcp 00:08:41.341 rmmod nvme_fabrics 00:08:41.341 rmmod nvme_keyring 00:08:41.341 15:31:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.341 15:31:42 -- nvmf/common.sh@124 -- # set -e 00:08:41.341 15:31:42 -- nvmf/common.sh@125 -- # return 0 00:08:41.341 15:31:42 -- nvmf/common.sh@478 -- # '[' -n 66344 ']' 00:08:41.341 15:31:42 -- nvmf/common.sh@479 -- # killprocess 66344 00:08:41.341 15:31:42 -- common/autotest_common.sh@936 -- # '[' -z 66344 ']' 00:08:41.341 15:31:42 -- common/autotest_common.sh@940 -- # kill -0 66344 00:08:41.341 15:31:42 -- common/autotest_common.sh@941 -- # uname 00:08:41.341 15:31:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.341 15:31:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66344 00:08:41.341 15:31:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.341 15:31:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.341 killing process with pid 66344 00:08:41.341 15:31:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66344' 00:08:41.341 15:31:42 -- common/autotest_common.sh@955 -- # kill 66344 00:08:41.341 15:31:42 -- common/autotest_common.sh@960 -- # wait 66344 00:08:41.600 15:31:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:41.600 15:31:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:41.600 15:31:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:41.600 15:31:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.600 15:31:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.600 15:31:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.600 15:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.600 15:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.600 15:31:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:41.600 00:08:41.600 real 0m40.486s 00:08:41.600 user 1m4.206s 00:08:41.600 sys 0m12.258s 00:08:41.600 15:31:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.600 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:41.600 ************************************ 00:08:41.600 END TEST nvmf_lvs_grow 00:08:41.600 ************************************ 00:08:41.865 15:31:43 -- nvmf/nvmf.sh@50 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:41.865 15:31:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.865 15:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.865 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:41.865 ************************************ 00:08:41.865 START TEST nvmf_bdev_io_wait 00:08:41.865 ************************************ 00:08:41.865 15:31:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:41.865 * Looking for test storage... 00:08:41.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.865 15:31:43 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.865 15:31:43 -- nvmf/common.sh@7 -- # uname -s 00:08:41.865 15:31:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.865 15:31:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.865 15:31:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.865 15:31:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.865 15:31:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.865 15:31:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.865 15:31:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.865 15:31:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.865 15:31:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.865 15:31:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.865 15:31:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:41.865 15:31:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:41.865 15:31:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.865 15:31:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.865 15:31:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.865 15:31:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.865 15:31:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.865 15:31:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.865 15:31:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.865 15:31:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.865 15:31:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.865 15:31:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.865 15:31:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.865 15:31:43 -- paths/export.sh@5 -- # export PATH 00:08:41.865 15:31:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.865 15:31:43 -- nvmf/common.sh@47 -- # : 0 00:08:41.865 15:31:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.865 15:31:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.865 15:31:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.865 15:31:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.865 15:31:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.865 15:31:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.865 15:31:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.865 15:31:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.865 15:31:43 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.865 15:31:43 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.865 15:31:43 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:41.865 15:31:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:41.865 15:31:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.865 15:31:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:41.865 15:31:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:41.865 15:31:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:41.865 15:31:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.865 15:31:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.865 15:31:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.865 15:31:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:41.865 15:31:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:41.865 15:31:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:41.865 15:31:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:41.865 15:31:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:08:41.865 15:31:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:41.865 15:31:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.865 15:31:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.865 15:31:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.865 15:31:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:41.865 15:31:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.865 15:31:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.865 15:31:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.865 15:31:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.865 15:31:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.865 15:31:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.865 15:31:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.865 15:31:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.865 15:31:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:41.865 15:31:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:41.865 Cannot find device "nvmf_tgt_br" 00:08:41.865 15:31:43 -- nvmf/common.sh@155 -- # true 00:08:41.865 15:31:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.865 Cannot find device "nvmf_tgt_br2" 00:08:41.865 15:31:43 -- nvmf/common.sh@156 -- # true 00:08:41.865 15:31:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:41.865 15:31:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:42.135 Cannot find device "nvmf_tgt_br" 00:08:42.135 15:31:43 -- nvmf/common.sh@158 -- # true 00:08:42.135 15:31:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:42.135 Cannot find device "nvmf_tgt_br2" 00:08:42.135 15:31:43 -- nvmf/common.sh@159 -- # true 00:08:42.135 15:31:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:42.135 15:31:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:42.135 15:31:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.135 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.135 15:31:43 -- nvmf/common.sh@162 -- # true 00:08:42.135 15:31:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.135 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.135 15:31:43 -- nvmf/common.sh@163 -- # true 00:08:42.135 15:31:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.135 15:31:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.135 15:31:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.135 15:31:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.135 15:31:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.135 15:31:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.135 15:31:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.135 15:31:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:42.135 15:31:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:42.135 
15:31:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:42.135 15:31:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:42.135 15:31:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:42.135 15:31:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:42.135 15:31:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.135 15:31:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.135 15:31:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.135 15:31:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:42.135 15:31:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:42.135 15:31:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.135 15:31:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.135 15:31:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.135 15:31:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.394 15:31:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.394 15:31:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:42.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:42.394 00:08:42.394 --- 10.0.0.2 ping statistics --- 00:08:42.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.394 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:42.394 15:31:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:42.394 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.394 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:42.394 00:08:42.394 --- 10.0.0.3 ping statistics --- 00:08:42.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.394 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:42.394 15:31:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:42.394 00:08:42.394 --- 10.0.0.1 ping statistics --- 00:08:42.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.394 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:42.394 15:31:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.394 15:31:43 -- nvmf/common.sh@422 -- # return 0 00:08:42.394 15:31:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:42.394 15:31:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.394 15:31:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:42.394 15:31:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:42.394 15:31:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.394 15:31:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:42.394 15:31:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:42.394 15:31:43 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:42.394 15:31:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:42.394 15:31:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.394 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.394 15:31:43 -- nvmf/common.sh@470 -- # nvmfpid=66666 00:08:42.394 15:31:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:42.394 15:31:43 -- nvmf/common.sh@471 -- # waitforlisten 66666 00:08:42.394 15:31:43 -- common/autotest_common.sh@817 -- # '[' -z 66666 ']' 00:08:42.394 15:31:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.394 15:31:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:42.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.394 15:31:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.394 15:31:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:42.394 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.394 [2024-04-17 15:31:43.681817] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:42.394 [2024-04-17 15:31:43.681971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.394 [2024-04-17 15:31:43.820473] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.652 [2024-04-17 15:31:43.943233] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.652 [2024-04-17 15:31:43.943608] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.652 [2024-04-17 15:31:43.943743] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.652 [2024-04-17 15:31:43.943945] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.652 [2024-04-17 15:31:43.943979] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:42.652 [2024-04-17 15:31:43.944184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.652 [2024-04-17 15:31:43.944492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.652 [2024-04-17 15:31:43.944495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.652 [2024-04-17 15:31:43.944331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.218 15:31:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:43.218 15:31:44 -- common/autotest_common.sh@850 -- # return 0 00:08:43.218 15:31:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:43.218 15:31:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:43.218 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 15:31:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 [2024-04-17 15:31:44.766247] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 Malloc0 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.478 15:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.478 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.478 [2024-04-17 15:31:44.846437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.478 15:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66707 00:08:43.478 15:31:44 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # config=() 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # local subsystem config 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=66709 00:08:43.478 15:31:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:08:43.478 { 00:08:43.478 "params": { 00:08:43.478 "name": "Nvme$subsystem", 00:08:43.478 "trtype": "$TEST_TRANSPORT", 00:08:43.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.478 "adrfam": "ipv4", 00:08:43.478 "trsvcid": "$NVMF_PORT", 00:08:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.478 "hdgst": ${hdgst:-false}, 00:08:43.478 "ddgst": ${ddgst:-false} 00:08:43.478 }, 00:08:43.478 "method": "bdev_nvme_attach_controller" 00:08:43.478 } 00:08:43.478 EOF 00:08:43.478 )") 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # config=() 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # local subsystem config 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66711 00:08:43.478 15:31:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:08:43.478 { 00:08:43.478 "params": { 00:08:43.478 "name": "Nvme$subsystem", 00:08:43.478 "trtype": "$TEST_TRANSPORT", 00:08:43.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.478 "adrfam": "ipv4", 00:08:43.478 "trsvcid": "$NVMF_PORT", 00:08:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.478 "hdgst": ${hdgst:-false}, 00:08:43.478 "ddgst": ${ddgst:-false} 00:08:43.478 }, 00:08:43.478 "method": "bdev_nvme_attach_controller" 00:08:43.478 } 00:08:43.478 EOF 00:08:43.478 )") 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66714 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@35 -- # sync 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # cat 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # config=() 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # cat 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # local subsystem config 00:08:43.478 15:31:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:08:43.478 { 00:08:43.478 "params": { 00:08:43.478 "name": "Nvme$subsystem", 00:08:43.478 "trtype": "$TEST_TRANSPORT", 00:08:43.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.478 "adrfam": "ipv4", 00:08:43.478 "trsvcid": "$NVMF_PORT", 00:08:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:08:43.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.478 "hdgst": ${hdgst:-false}, 00:08:43.478 "ddgst": ${ddgst:-false} 00:08:43.478 }, 00:08:43.478 "method": "bdev_nvme_attach_controller" 00:08:43.478 } 00:08:43.478 EOF 00:08:43.478 )") 00:08:43.478 15:31:44 -- nvmf/common.sh@545 -- # jq . 00:08:43.478 15:31:44 -- nvmf/common.sh@545 -- # jq . 00:08:43.478 15:31:44 -- nvmf/common.sh@546 -- # IFS=, 00:08:43.478 15:31:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:08:43.478 "params": { 00:08:43.478 "name": "Nvme1", 00:08:43.478 "trtype": "tcp", 00:08:43.478 "traddr": "10.0.0.2", 00:08:43.478 "adrfam": "ipv4", 00:08:43.478 "trsvcid": "4420", 00:08:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.478 "hdgst": false, 00:08:43.478 "ddgst": false 00:08:43.478 }, 00:08:43.478 "method": "bdev_nvme_attach_controller" 00:08:43.478 }' 00:08:43.478 15:31:44 -- nvmf/common.sh@543 -- # cat 00:08:43.478 15:31:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:43.478 15:31:44 -- nvmf/common.sh@546 -- # IFS=, 00:08:43.478 15:31:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:08:43.478 "params": { 00:08:43.478 "name": "Nvme1", 00:08:43.478 "trtype": "tcp", 00:08:43.478 "traddr": "10.0.0.2", 00:08:43.478 "adrfam": "ipv4", 00:08:43.478 "trsvcid": "4420", 00:08:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.478 "hdgst": false, 00:08:43.478 "ddgst": false 00:08:43.478 }, 00:08:43.478 "method": "bdev_nvme_attach_controller" 00:08:43.478 }' 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # config=() 00:08:43.478 15:31:44 -- nvmf/common.sh@521 -- # local subsystem config 00:08:43.478 15:31:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:08:43.479 15:31:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:08:43.479 { 00:08:43.479 "params": { 00:08:43.479 "name": "Nvme$subsystem", 00:08:43.479 "trtype": "$TEST_TRANSPORT", 00:08:43.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.479 "adrfam": "ipv4", 00:08:43.479 "trsvcid": "$NVMF_PORT", 00:08:43.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.479 "hdgst": ${hdgst:-false}, 00:08:43.479 "ddgst": ${ddgst:-false} 00:08:43.479 }, 00:08:43.479 "method": "bdev_nvme_attach_controller" 00:08:43.479 } 00:08:43.479 EOF 00:08:43.479 )") 00:08:43.479 15:31:44 -- nvmf/common.sh@543 -- # cat 00:08:43.479 15:31:44 -- nvmf/common.sh@545 -- # jq . 00:08:43.479 15:31:44 -- nvmf/common.sh@546 -- # IFS=, 00:08:43.479 15:31:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:08:43.479 "params": { 00:08:43.479 "name": "Nvme1", 00:08:43.479 "trtype": "tcp", 00:08:43.479 "traddr": "10.0.0.2", 00:08:43.479 "adrfam": "ipv4", 00:08:43.479 "trsvcid": "4420", 00:08:43.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.479 "hdgst": false, 00:08:43.479 "ddgst": false 00:08:43.479 }, 00:08:43.479 "method": "bdev_nvme_attach_controller" 00:08:43.479 }' 00:08:43.479 15:31:44 -- nvmf/common.sh@545 -- # jq . 
00:08:43.479 15:31:44 -- nvmf/common.sh@546 -- # IFS=, 00:08:43.479 15:31:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:08:43.479 "params": { 00:08:43.479 "name": "Nvme1", 00:08:43.479 "trtype": "tcp", 00:08:43.479 "traddr": "10.0.0.2", 00:08:43.479 "adrfam": "ipv4", 00:08:43.479 "trsvcid": "4420", 00:08:43.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.479 "hdgst": false, 00:08:43.479 "ddgst": false 00:08:43.479 }, 00:08:43.479 "method": "bdev_nvme_attach_controller" 00:08:43.479 }' 00:08:43.479 [2024-04-17 15:31:44.903369] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:43.479 [2024-04-17 15:31:44.903583] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:43.479 15:31:44 -- target/bdev_io_wait.sh@37 -- # wait 66707 00:08:43.738 [2024-04-17 15:31:44.925695] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:43.738 [2024-04-17 15:31:44.925698] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:43.738 [2024-04-17 15:31:44.925964] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:43.738 [2024-04-17 15:31:44.927388] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:43.738 [2024-04-17 15:31:44.944353] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:43.738 [2024-04-17 15:31:44.945181] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:43.738 [2024-04-17 15:31:45.132930] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.996 [2024-04-17 15:31:45.236922] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.996 [2024-04-17 15:31:45.253853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:43.996 [2024-04-17 15:31:45.262766] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:43.996 [2024-04-17 15:31:45.328901] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.996 [2024-04-17 15:31:45.347504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:43.996 [2024-04-17 15:31:45.356369] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:43.996 [2024-04-17 15:31:45.429630] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.255 [2024-04-17 15:31:45.451481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:44.255 [2024-04-17 15:31:45.460321] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:44.255 [2024-04-17 15:31:45.483595] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:44.255 Running I/O for 1 seconds... 
00:08:44.255 [2024-04-17 15:31:45.503422] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:44.255 Running I/O for 1 seconds... 00:08:44.255 [2024-04-17 15:31:45.549017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:44.255 [2024-04-17 15:31:45.557862] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:44.255 [2024-04-17 15:31:45.613099] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:44.255 Running I/O for 1 seconds... 00:08:44.514 Running I/O for 1 seconds... 00:08:44.514 [2024-04-17 15:31:45.703588] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:08:45.082 00:08:45.082 Latency(us) 00:08:45.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.082 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:45.083 Nvme1n1 : 1.00 175050.85 683.79 0.00 0.00 728.59 323.96 1057.51 00:08:45.083 =================================================================================================================== 00:08:45.083 Total : 175050.85 683.79 0.00 0.00 728.59 323.96 1057.51 00:08:45.083 00:08:45.083 Latency(us) 00:08:45.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.083 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:45.083 Nvme1n1 : 1.01 9272.33 36.22 0.00 0.00 13740.81 7387.69 21805.61 00:08:45.083 =================================================================================================================== 00:08:45.083 Total : 9272.33 36.22 0.00 0.00 13740.81 7387.69 21805.61 00:08:45.341 00:08:45.341 Latency(us) 00:08:45.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.341 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:45.341 Nvme1n1 : 1.01 7310.54 28.56 0.00 0.00 17408.33 10843.23 28001.75 00:08:45.341 =================================================================================================================== 00:08:45.341 Total : 7310.54 28.56 0.00 0.00 17408.33 10843.23 28001.75 00:08:45.341 00:08:45.341 Latency(us) 00:08:45.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.341 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:45.341 Nvme1n1 : 1.01 9136.72 35.69 0.00 0.00 13951.34 7328.12 26691.03 00:08:45.341 =================================================================================================================== 00:08:45.341 Total : 9136.72 35.69 0.00 0.00 13951.34 7328.12 26691.03 00:08:45.600 15:31:46 -- target/bdev_io_wait.sh@38 -- # wait 66709 00:08:45.600 15:31:46 -- target/bdev_io_wait.sh@39 -- # wait 66711 00:08:45.600 15:31:46 -- target/bdev_io_wait.sh@40 -- # wait 66714 00:08:45.858 15:31:47 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.858 15:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.858 15:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.858 15:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.858 15:31:47 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:45.858 15:31:47 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:45.858 15:31:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:45.858 15:31:47 -- nvmf/common.sh@117 -- # sync 00:08:45.858 15:31:47 -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:08:45.859 15:31:47 -- nvmf/common.sh@120 -- # set +e 00:08:45.859 15:31:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.859 15:31:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.859 rmmod nvme_tcp 00:08:45.859 rmmod nvme_fabrics 00:08:45.859 rmmod nvme_keyring 00:08:45.859 15:31:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.859 15:31:47 -- nvmf/common.sh@124 -- # set -e 00:08:45.859 15:31:47 -- nvmf/common.sh@125 -- # return 0 00:08:45.859 15:31:47 -- nvmf/common.sh@478 -- # '[' -n 66666 ']' 00:08:45.859 15:31:47 -- nvmf/common.sh@479 -- # killprocess 66666 00:08:45.859 15:31:47 -- common/autotest_common.sh@936 -- # '[' -z 66666 ']' 00:08:45.859 15:31:47 -- common/autotest_common.sh@940 -- # kill -0 66666 00:08:45.859 15:31:47 -- common/autotest_common.sh@941 -- # uname 00:08:45.859 15:31:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.859 15:31:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66666 00:08:45.859 15:31:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.859 15:31:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.859 killing process with pid 66666 00:08:45.859 15:31:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66666' 00:08:45.859 15:31:47 -- common/autotest_common.sh@955 -- # kill 66666 00:08:45.859 15:31:47 -- common/autotest_common.sh@960 -- # wait 66666 00:08:46.117 15:31:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.117 15:31:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.117 15:31:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.117 15:31:47 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.117 15:31:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.117 15:31:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.117 15:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.118 15:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.118 15:31:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:46.118 00:08:46.118 real 0m4.406s 00:08:46.118 user 0m19.169s 00:08:46.118 sys 0m2.480s 00:08:46.118 15:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.118 15:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:46.118 ************************************ 00:08:46.118 END TEST nvmf_bdev_io_wait 00:08:46.118 ************************************ 00:08:46.377 15:31:47 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.377 15:31:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:46.377 15:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.377 15:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:46.377 ************************************ 00:08:46.377 START TEST nvmf_queue_depth 00:08:46.377 ************************************ 00:08:46.377 15:31:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.377 * Looking for test storage... 
00:08:46.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:46.377 15:31:47 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:46.377 15:31:47 -- nvmf/common.sh@7 -- # uname -s 00:08:46.377 15:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.377 15:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.377 15:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.377 15:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.377 15:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.377 15:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.377 15:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.377 15:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.377 15:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.377 15:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.377 15:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:46.377 15:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:08:46.377 15:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.377 15:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.377 15:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:46.377 15:31:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.377 15:31:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.377 15:31:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.377 15:31:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.377 15:31:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.377 15:31:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.377 15:31:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.377 15:31:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.377 15:31:47 -- paths/export.sh@5 -- # export PATH 00:08:46.378 15:31:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.378 15:31:47 -- nvmf/common.sh@47 -- # : 0 00:08:46.378 15:31:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.378 15:31:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.378 15:31:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.378 15:31:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.378 15:31:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.378 15:31:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.378 15:31:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.378 15:31:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.378 15:31:47 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:46.378 15:31:47 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:46.378 15:31:47 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:46.378 15:31:47 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:46.378 15:31:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:46.378 15:31:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.378 15:31:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:46.378 15:31:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:46.378 15:31:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:46.378 15:31:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.378 15:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.378 15:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.378 15:31:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:46.378 15:31:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:46.378 15:31:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:46.378 15:31:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:46.378 15:31:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:46.378 15:31:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:46.378 15:31:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.378 15:31:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.378 15:31:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:46.378 15:31:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:46.378 15:31:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:46.378 15:31:47 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:46.378 15:31:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:46.378 15:31:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.378 15:31:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:46.378 15:31:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:46.378 15:31:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:46.378 15:31:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:46.378 15:31:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:46.378 15:31:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:46.378 Cannot find device "nvmf_tgt_br" 00:08:46.378 15:31:47 -- nvmf/common.sh@155 -- # true 00:08:46.378 15:31:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:46.638 Cannot find device "nvmf_tgt_br2" 00:08:46.638 15:31:47 -- nvmf/common.sh@156 -- # true 00:08:46.638 15:31:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:46.638 15:31:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:46.638 Cannot find device "nvmf_tgt_br" 00:08:46.638 15:31:47 -- nvmf/common.sh@158 -- # true 00:08:46.638 15:31:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:46.638 Cannot find device "nvmf_tgt_br2" 00:08:46.638 15:31:47 -- nvmf/common.sh@159 -- # true 00:08:46.638 15:31:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:46.638 15:31:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:46.638 15:31:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:46.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.638 15:31:47 -- nvmf/common.sh@162 -- # true 00:08:46.638 15:31:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.638 15:31:47 -- nvmf/common.sh@163 -- # true 00:08:46.638 15:31:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:46.638 15:31:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:46.638 15:31:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:46.638 15:31:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:46.638 15:31:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:46.638 15:31:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:46.638 15:31:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:46.638 15:31:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:46.638 15:31:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:46.638 15:31:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:46.638 15:31:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:46.638 15:31:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:46.638 15:31:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:46.638 15:31:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:46.638 15:31:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:08:46.638 15:31:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:46.638 15:31:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:46.638 15:31:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:46.638 15:31:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:46.638 15:31:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:46.638 15:31:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:46.638 15:31:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:46.638 15:31:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:46.638 15:31:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:46.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:46.638 00:08:46.638 --- 10.0.0.2 ping statistics --- 00:08:46.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.638 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:46.638 15:31:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:46.638 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:46.638 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:46.638 00:08:46.638 --- 10.0.0.3 ping statistics --- 00:08:46.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.638 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:46.638 15:31:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:46.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:46.638 00:08:46.638 --- 10.0.0.1 ping statistics --- 00:08:46.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.638 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:46.897 15:31:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.897 15:31:48 -- nvmf/common.sh@422 -- # return 0 00:08:46.897 15:31:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:46.897 15:31:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.897 15:31:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:46.897 15:31:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:46.897 15:31:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.897 15:31:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:46.897 15:31:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:46.897 15:31:48 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:46.897 15:31:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:46.897 15:31:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:46.897 15:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:46.897 15:31:48 -- nvmf/common.sh@470 -- # nvmfpid=66954 00:08:46.897 15:31:48 -- nvmf/common.sh@471 -- # waitforlisten 66954 00:08:46.897 15:31:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:46.897 15:31:48 -- common/autotest_common.sh@817 -- # '[' -z 66954 ']' 00:08:46.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
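The nvmf_veth_init sequence traced above is self-contained and can be replayed by hand when debugging connectivity. A minimal sketch of the same topology (initiator veth in the default namespace, two target veths inside nvmf_tgt_ns_spdk, one bridge, and the port-4420 iptables rule), assuming root privileges and the same 10.0.0.0/24 addressing:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target path
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target path
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # initiator -> both target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator

The "Cannot find device" / "Cannot open network namespace" messages above are expected on a clean host: the helper first tries to tear down leftovers from a previous run before recreating everything.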
00:08:46.897 15:31:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.897 15:31:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:46.897 15:31:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.897 15:31:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:46.897 15:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:46.897 [2024-04-17 15:31:48.167174] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:46.897 [2024-04-17 15:31:48.167324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.897 [2024-04-17 15:31:48.309050] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.156 [2024-04-17 15:31:48.434407] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.156 [2024-04-17 15:31:48.434473] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.156 [2024-04-17 15:31:48.434500] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.156 [2024-04-17 15:31:48.434519] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.156 [2024-04-17 15:31:48.434527] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.156 [2024-04-17 15:31:48.434555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.734 15:31:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:47.734 15:31:49 -- common/autotest_common.sh@850 -- # return 0 00:08:47.734 15:31:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:47.734 15:31:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:47.734 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:47.734 15:31:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.734 15:31:49 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.734 15:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.734 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:47.734 [2024-04-17 15:31:49.127873] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.734 15:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.734 15:31:49 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.734 15:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.734 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 Malloc0 00:08:48.007 15:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.007 15:31:49 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.007 15:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.007 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 15:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.007 15:31:49 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.007 15:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.007 
15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 15:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.007 15:31:49 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.007 15:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.007 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 [2024-04-17 15:31:49.194285] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.007 15:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.007 15:31:49 -- target/queue_depth.sh@30 -- # bdevperf_pid=66986 00:08:48.007 15:31:49 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.007 15:31:49 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:48.007 15:31:49 -- target/queue_depth.sh@33 -- # waitforlisten 66986 /var/tmp/bdevperf.sock 00:08:48.007 15:31:49 -- common/autotest_common.sh@817 -- # '[' -z 66986 ']' 00:08:48.007 15:31:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.007 15:31:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:48.007 15:31:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.007 15:31:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:48.007 15:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 [2024-04-17 15:31:49.252382] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:08:48.007 [2024-04-17 15:31:49.252955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66986 ] 00:08:48.007 [2024-04-17 15:31:49.395025] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.266 [2024-04-17 15:31:49.541055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.834 15:31:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:48.834 15:31:50 -- common/autotest_common.sh@850 -- # return 0 00:08:48.834 15:31:50 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:48.834 15:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.834 15:31:50 -- common/autotest_common.sh@10 -- # set +x 00:08:49.093 NVMe0n1 00:08:49.093 15:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.093 15:31:50 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.093 Running I/O for 10 seconds... 
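Stripped of the xtrace noise, the queue-depth scenario above is a short RPC sequence against the target plus a bdevperf instance on the initiator side. A sketch using the same paths and addresses as this run (the target answers rpc.py on the default /var/tmp/spdk.sock; the socket-wait loop is a crude stand-in for the harness's waitforlisten):

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK/scripts/rpc.py

  # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem
  # with one namespace and a listener on 10.0.0.2:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 1024,
  # 4 KiB verify workload for 10 seconds.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.5; done
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests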
00:09:01.303 00:09:01.303 Latency(us) 00:09:01.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.303 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:01.303 Verification LBA range: start 0x0 length 0x4000 00:09:01.303 NVMe0n1 : 10.09 8622.08 33.68 0.00 0.00 118235.65 27644.28 88175.71 00:09:01.303 =================================================================================================================== 00:09:01.303 Total : 8622.08 33.68 0.00 0.00 118235.65 27644.28 88175.71 00:09:01.303 0 00:09:01.303 15:32:00 -- target/queue_depth.sh@39 -- # killprocess 66986 00:09:01.303 15:32:00 -- common/autotest_common.sh@936 -- # '[' -z 66986 ']' 00:09:01.303 15:32:00 -- common/autotest_common.sh@940 -- # kill -0 66986 00:09:01.303 15:32:00 -- common/autotest_common.sh@941 -- # uname 00:09:01.303 15:32:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.303 15:32:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66986 00:09:01.303 killing process with pid 66986 00:09:01.303 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.303 00:09:01.303 Latency(us) 00:09:01.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.303 =================================================================================================================== 00:09:01.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.303 15:32:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:01.303 15:32:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:01.304 15:32:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66986' 00:09:01.304 15:32:00 -- common/autotest_common.sh@955 -- # kill 66986 00:09:01.304 15:32:00 -- common/autotest_common.sh@960 -- # wait 66986 00:09:01.304 15:32:00 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:01.304 15:32:00 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:01.304 15:32:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:01.304 15:32:00 -- nvmf/common.sh@117 -- # sync 00:09:01.304 15:32:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.304 15:32:00 -- nvmf/common.sh@120 -- # set +e 00:09:01.304 15:32:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.304 15:32:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.304 rmmod nvme_tcp 00:09:01.304 rmmod nvme_fabrics 00:09:01.304 rmmod nvme_keyring 00:09:01.304 15:32:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.304 15:32:01 -- nvmf/common.sh@124 -- # set -e 00:09:01.304 15:32:01 -- nvmf/common.sh@125 -- # return 0 00:09:01.304 15:32:01 -- nvmf/common.sh@478 -- # '[' -n 66954 ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@479 -- # killprocess 66954 00:09:01.304 15:32:01 -- common/autotest_common.sh@936 -- # '[' -z 66954 ']' 00:09:01.304 15:32:01 -- common/autotest_common.sh@940 -- # kill -0 66954 00:09:01.304 15:32:01 -- common/autotest_common.sh@941 -- # uname 00:09:01.304 15:32:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.304 15:32:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66954 00:09:01.304 killing process with pid 66954 00:09:01.304 15:32:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:01.304 15:32:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:01.304 15:32:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66954' 00:09:01.304 15:32:01 -- 
common/autotest_common.sh@955 -- # kill 66954 00:09:01.304 15:32:01 -- common/autotest_common.sh@960 -- # wait 66954 00:09:01.304 15:32:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:01.304 15:32:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.304 15:32:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.304 15:32:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.304 15:32:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.304 15:32:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.304 00:09:01.304 real 0m13.782s 00:09:01.304 user 0m23.823s 00:09:01.304 sys 0m2.219s 00:09:01.304 ************************************ 00:09:01.304 END TEST nvmf_queue_depth 00:09:01.304 ************************************ 00:09:01.304 15:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:01.304 15:32:01 -- common/autotest_common.sh@10 -- # set +x 00:09:01.304 15:32:01 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.304 15:32:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:01.304 15:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.304 15:32:01 -- common/autotest_common.sh@10 -- # set +x 00:09:01.304 ************************************ 00:09:01.304 START TEST nvmf_multipath 00:09:01.304 ************************************ 00:09:01.304 15:32:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.304 * Looking for test storage... 
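The nvmftestfini teardown recorded just above mirrors the setup. A rough sketch of the same cleanup, assuming nvmfpid still holds the target's PID (as it does in the harness); note that this run leaves the bridge and initiator veth in place and only removes them during the next test's veth init:

  sync
  modprobe -v -r nvme-tcp                     # also pulls out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                             # stop nvmf_tgt (66954 in this run)
  ip netns delete nvmf_tgt_ns_spdk            # takes nvmf_tgt_if / nvmf_tgt_if2 with it
  ip -4 addr flush nvmf_init_if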
00:09:01.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.304 15:32:01 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.304 15:32:01 -- nvmf/common.sh@7 -- # uname -s 00:09:01.304 15:32:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.304 15:32:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.304 15:32:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.304 15:32:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.304 15:32:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.304 15:32:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.304 15:32:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.304 15:32:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.304 15:32:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.304 15:32:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:01.304 15:32:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:01.304 15:32:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.304 15:32:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.304 15:32:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.304 15:32:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.304 15:32:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.304 15:32:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.304 15:32:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.304 15:32:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.304 15:32:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.304 15:32:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.304 15:32:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.304 15:32:01 -- paths/export.sh@5 -- # export PATH 00:09:01.304 15:32:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.304 15:32:01 -- nvmf/common.sh@47 -- # : 0 00:09:01.304 15:32:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.304 15:32:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.304 15:32:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.304 15:32:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.304 15:32:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.304 15:32:01 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.304 15:32:01 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.304 15:32:01 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:01.304 15:32:01 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.304 15:32:01 -- target/multipath.sh@43 -- # nvmftestinit 00:09:01.304 15:32:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:01.304 15:32:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.304 15:32:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:01.304 15:32:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:01.304 15:32:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:01.304 15:32:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.304 15:32:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.304 15:32:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.304 15:32:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:01.304 15:32:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:01.304 15:32:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.304 15:32:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.304 15:32:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.304 15:32:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.304 15:32:01 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.304 15:32:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.304 15:32:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.304 15:32:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.304 15:32:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.304 15:32:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.304 15:32:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.304 15:32:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.304 15:32:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:01.304 15:32:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:01.304 Cannot find device "nvmf_tgt_br" 00:09:01.304 15:32:01 -- nvmf/common.sh@155 -- # true 00:09:01.304 15:32:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.304 Cannot find device "nvmf_tgt_br2" 00:09:01.304 15:32:01 -- nvmf/common.sh@156 -- # true 00:09:01.304 15:32:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:01.304 15:32:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:01.304 Cannot find device "nvmf_tgt_br" 00:09:01.304 15:32:01 -- nvmf/common.sh@158 -- # true 00:09:01.304 15:32:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:01.304 Cannot find device "nvmf_tgt_br2" 00:09:01.304 15:32:01 -- nvmf/common.sh@159 -- # true 00:09:01.304 15:32:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:01.304 15:32:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:01.305 15:32:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.305 15:32:01 -- nvmf/common.sh@162 -- # true 00:09:01.305 15:32:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.305 15:32:01 -- nvmf/common.sh@163 -- # true 00:09:01.305 15:32:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.305 15:32:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.305 15:32:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.305 15:32:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.305 15:32:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.305 15:32:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.305 15:32:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.305 15:32:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:01.305 15:32:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:01.305 15:32:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:01.305 15:32:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:01.305 15:32:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:01.305 15:32:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:01.305 15:32:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:09:01.305 15:32:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.305 15:32:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.305 15:32:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:01.305 15:32:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:01.305 15:32:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.305 15:32:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.305 15:32:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.305 15:32:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.305 15:32:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.305 15:32:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:01.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:09:01.305 00:09:01.305 --- 10.0.0.2 ping statistics --- 00:09:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.305 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:01.305 15:32:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:01.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:09:01.305 00:09:01.305 --- 10.0.0.3 ping statistics --- 00:09:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.305 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:01.305 15:32:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:01.305 00:09:01.305 --- 10.0.0.1 ping statistics --- 00:09:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.305 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:01.305 15:32:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.305 15:32:02 -- nvmf/common.sh@422 -- # return 0 00:09:01.305 15:32:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:01.305 15:32:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.305 15:32:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:01.305 15:32:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:01.305 15:32:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.305 15:32:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:01.305 15:32:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:01.305 15:32:02 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:01.305 15:32:02 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:01.305 15:32:02 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:01.305 15:32:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:01.305 15:32:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:01.305 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
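Before its own target comes up, multipath.sh re-validates the prerequisites visible above: a second target address must be configured and the kernel NVMe/TCP initiator must load. A compact sketch of that gate, using the same variable names as the harness:

  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_SECOND_TARGET_IP=10.0.0.3
  [ -n "$NVMF_SECOND_TARGET_IP" ] || { echo "multipath needs a second target IP"; exit 1; }
  modprobe nvme-tcp                            # host-side NVMe/TCP initiator
  for ip in "$NVMF_FIRST_TARGET_IP" "$NVMF_SECOND_TARGET_IP"; do
      ping -c 1 -W 1 "$ip" >/dev/null || { echo "$ip unreachable"; exit 1; }
  done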
00:09:01.305 15:32:02 -- nvmf/common.sh@470 -- # nvmfpid=67308 00:09:01.305 15:32:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.305 15:32:02 -- nvmf/common.sh@471 -- # waitforlisten 67308 00:09:01.305 15:32:02 -- common/autotest_common.sh@817 -- # '[' -z 67308 ']' 00:09:01.305 15:32:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.305 15:32:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:01.305 15:32:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.305 15:32:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:01.305 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.305 [2024-04-17 15:32:02.090339] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:09:01.305 [2024-04-17 15:32:02.090438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.305 [2024-04-17 15:32:02.231328] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.305 [2024-04-17 15:32:02.364607] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.305 [2024-04-17 15:32:02.364963] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.305 [2024-04-17 15:32:02.365092] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.305 [2024-04-17 15:32:02.365209] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.305 [2024-04-17 15:32:02.365243] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
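nvmfappstart, logged here, launches the target inside the namespace with a four-core mask and blocks until the RPC socket answers before any nvmf_* RPCs are issued. A minimal equivalent, assuming the same repository path and the default /var/tmp/spdk.sock:

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Same idea as the harness's waitforlisten, without its retry cap.
  until [ -S /var/tmp/spdk.sock ] && "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done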
00:09:01.305 [2024-04-17 15:32:02.365494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.305 [2024-04-17 15:32:02.365637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.305 [2024-04-17 15:32:02.365746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.305 [2024-04-17 15:32:02.365747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.873 15:32:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:01.873 15:32:03 -- common/autotest_common.sh@850 -- # return 0 00:09:01.873 15:32:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:01.873 15:32:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:01.873 15:32:03 -- common/autotest_common.sh@10 -- # set +x 00:09:01.873 15:32:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.873 15:32:03 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.131 [2024-04-17 15:32:03.362089] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.131 15:32:03 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:02.398 Malloc0 00:09:02.398 15:32:03 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:02.673 15:32:03 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.931 15:32:04 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.189 [2024-04-17 15:32:04.412328] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.189 15:32:04 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:03.447 [2024-04-17 15:32:04.636537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.447 15:32:04 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:03.447 15:32:04 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:03.705 15:32:04 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.705 15:32:04 -- common/autotest_common.sh@1184 -- # local i=0 00:09:03.705 15:32:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.705 15:32:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:03.705 15:32:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:05.610 15:32:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:05.610 15:32:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:05.610 15:32:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.610 15:32:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:05.610 15:32:06 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.610 15:32:06 -- common/autotest_common.sh@1194 -- # return 0 00:09:05.610 15:32:06 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:05.610 15:32:06 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:05.610 15:32:06 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:05.610 15:32:06 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:05.610 15:32:06 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:05.610 15:32:06 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:05.610 15:32:06 -- target/multipath.sh@38 -- # return 0 00:09:05.610 15:32:06 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:05.610 15:32:06 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:05.610 15:32:06 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:05.610 15:32:06 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:05.610 15:32:06 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:05.610 15:32:06 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:05.610 15:32:06 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:05.610 15:32:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:05.610 15:32:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:05.610 15:32:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:05.610 15:32:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:05.610 15:32:06 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.610 15:32:06 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:05.610 15:32:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:05.610 15:32:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:05.610 15:32:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:05.610 15:32:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:05.610 15:32:06 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.610 15:32:06 -- target/multipath.sh@85 -- # echo numa 00:09:05.610 15:32:06 -- target/multipath.sh@88 -- # fio_pid=67403 00:09:05.610 15:32:06 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:05.610 15:32:06 -- target/multipath.sh@90 -- # sleep 1 00:09:05.610 [global] 00:09:05.610 thread=1 00:09:05.610 invalidate=1 00:09:05.610 rw=randrw 00:09:05.610 time_based=1 00:09:05.610 runtime=6 00:09:05.610 ioengine=libaio 00:09:05.610 direct=1 00:09:05.610 bs=4096 00:09:05.610 iodepth=128 00:09:05.610 norandommap=0 00:09:05.610 numjobs=1 00:09:05.610 00:09:05.610 verify_dump=1 00:09:05.610 verify_backlog=512 00:09:05.610 verify_state_save=0 00:09:05.610 do_verify=1 00:09:05.610 verify=crc32c-intel 00:09:05.610 [job0] 00:09:05.610 filename=/dev/nvme0n1 00:09:05.610 Could not set queue depth (nvme0n1) 00:09:05.870 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.870 fio-3.35 00:09:05.870 Starting 1 thread 00:09:06.807 15:32:07 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:06.807 15:32:08 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:07.067 15:32:08 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:07.067 15:32:08 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:07.067 15:32:08 -- target/multipath.sh@22 -- # local timeout=20 00:09:07.067 15:32:08 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.067 15:32:08 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.067 15:32:08 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.067 15:32:08 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:07.067 15:32:08 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:07.067 15:32:08 -- target/multipath.sh@22 -- # local timeout=20 00:09:07.067 15:32:08 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.067 15:32:08 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.067 15:32:08 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.067 15:32:08 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:07.635 15:32:08 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:07.635 15:32:09 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:07.635 15:32:09 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:07.635 15:32:09 -- target/multipath.sh@22 -- # local timeout=20 00:09:07.635 15:32:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.635 15:32:09 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.635 15:32:09 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.635 15:32:09 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:07.635 15:32:09 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:07.635 15:32:09 -- target/multipath.sh@22 -- # local timeout=20 00:09:07.635 15:32:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.635 15:32:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.635 15:32:09 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.635 15:32:09 -- target/multipath.sh@104 -- # wait 67403 00:09:11.826 00:09:11.826 job0: (groupid=0, jobs=1): err= 0: pid=67424: Wed Apr 17 15:32:13 2024 00:09:11.826 read: IOPS=10.2k, BW=39.7MiB/s (41.7MB/s)(239MiB/6007msec) 00:09:11.826 slat (usec): min=2, max=7921, avg=58.10, stdev=241.61 00:09:11.826 clat (usec): min=1582, max=17198, avg=8600.00, stdev=1544.63 00:09:11.826 lat (usec): min=1592, max=17208, avg=8658.10, stdev=1550.11 00:09:11.826 clat percentiles (usec): 00:09:11.826 | 1.00th=[ 4424], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 7701], 00:09:11.826 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:09:11.826 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[11994], 00:09:11.826 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15008], 99.95th=[15401], 00:09:11.826 | 99.99th=[16319] 00:09:11.826 bw ( KiB/s): min= 5632, max=27808, per=50.94%, avg=20725.82, stdev=7229.76, samples=11 00:09:11.826 iops : min= 1408, max= 6952, avg=5181.45, stdev=1807.44, samples=11 00:09:11.826 write: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(123MiB/5252msec); 0 zone resets 00:09:11.826 slat (usec): min=4, max=6275, avg=67.53, stdev=174.66 00:09:11.826 clat (usec): min=2141, max=15564, avg=7515.44, stdev=1393.53 00:09:11.826 lat (usec): min=2166, max=15597, avg=7582.97, stdev=1397.81 00:09:11.826 clat percentiles (usec): 00:09:11.826 | 1.00th=[ 3392], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 6915], 00:09:11.826 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7898], 00:09:11.826 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9110], 00:09:11.826 | 99.00th=[11731], 99.50th=[12387], 99.90th=[13960], 99.95th=[14353], 00:09:11.826 | 99.99th=[15533] 00:09:11.826 bw ( KiB/s): min= 5928, max=27448, per=86.84%, avg=20810.91, stdev=7069.43, samples=11 00:09:11.826 iops : min= 1482, max= 6862, avg=5202.73, stdev=1767.36, samples=11 00:09:11.826 lat (msec) : 2=0.01%, 4=1.45%, 10=91.20%, 20=7.34% 00:09:11.826 cpu : usr=5.24%, sys=20.45%, ctx=5304, majf=0, minf=108 00:09:11.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:11.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.826 issued rwts: total=61103,31464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.826 00:09:11.826 Run status group 0 (all jobs): 00:09:11.826 READ: bw=39.7MiB/s (41.7MB/s), 39.7MiB/s-39.7MiB/s (41.7MB/s-41.7MB/s), io=239MiB (250MB), run=6007-6007msec 00:09:11.826 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=123MiB (129MB), run=5252-5252msec 00:09:11.826 00:09:11.826 Disk stats (read/write): 00:09:11.826 nvme0n1: ios=60484/30606, merge=0/0, 
ticks=498054/215308, in_queue=713362, util=98.70% 00:09:11.826 15:32:13 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:12.393 15:32:13 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:12.393 15:32:13 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:12.393 15:32:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:12.393 15:32:13 -- target/multipath.sh@22 -- # local timeout=20 00:09:12.393 15:32:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:12.393 15:32:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:12.393 15:32:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:12.393 15:32:13 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:12.393 15:32:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:12.393 15:32:13 -- target/multipath.sh@22 -- # local timeout=20 00:09:12.393 15:32:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:12.393 15:32:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:12.393 15:32:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:12.393 15:32:13 -- target/multipath.sh@113 -- # echo round-robin 00:09:12.393 15:32:13 -- target/multipath.sh@116 -- # fio_pid=67505 00:09:12.393 15:32:13 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:12.393 15:32:13 -- target/multipath.sh@118 -- # sleep 1 00:09:12.393 [global] 00:09:12.393 thread=1 00:09:12.393 invalidate=1 00:09:12.393 rw=randrw 00:09:12.393 time_based=1 00:09:12.393 runtime=6 00:09:12.393 ioengine=libaio 00:09:12.393 direct=1 00:09:12.393 bs=4096 00:09:12.393 iodepth=128 00:09:12.393 norandommap=0 00:09:12.393 numjobs=1 00:09:12.393 00:09:12.393 verify_dump=1 00:09:12.393 verify_backlog=512 00:09:12.393 verify_state_save=0 00:09:12.393 do_verify=1 00:09:12.393 verify=crc32c-intel 00:09:12.393 [job0] 00:09:12.393 filename=/dev/nvme0n1 00:09:12.393 Could not set queue depth (nvme0n1) 00:09:12.652 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.652 fio-3.35 00:09:12.652 Starting 1 thread 00:09:13.586 15:32:14 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:13.844 15:32:15 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:14.103 15:32:15 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:14.103 15:32:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:14.103 15:32:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.103 15:32:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.103 15:32:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.103 15:32:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.103 15:32:15 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:14.103 15:32:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:14.103 15:32:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.103 15:32:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.103 15:32:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.103 15:32:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.103 15:32:15 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:14.103 15:32:15 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:14.362 15:32:15 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:14.362 15:32:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:14.362 15:32:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.362 15:32:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.362 15:32:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.362 15:32:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.362 15:32:15 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:14.362 15:32:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:14.362 15:32:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.362 15:32:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.362 15:32:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.362 15:32:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.362 15:32:15 -- target/multipath.sh@132 -- # wait 67505 00:09:19.690 00:09:19.690 job0: (groupid=0, jobs=1): err= 0: pid=67526: Wed Apr 17 15:32:20 2024 00:09:19.690 read: IOPS=9832, BW=38.4MiB/s (40.3MB/s)(231MiB/6003msec) 00:09:19.690 slat (usec): min=4, max=6699, avg=52.38, stdev=222.96 00:09:19.690 clat (usec): min=329, max=22591, avg=8993.25, stdev=2658.12 00:09:19.690 lat (usec): min=352, max=22601, avg=9045.63, stdev=2663.27 00:09:19.690 clat percentiles (usec): 00:09:19.690 | 1.00th=[ 1909], 5.00th=[ 4113], 10.00th=[ 5800], 20.00th=[ 7767], 00:09:19.690 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:19.690 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[12518], 95.00th=[13829], 00:09:19.690 | 99.00th=[17433], 99.50th=[19006], 99.90th=[20579], 99.95th=[21103], 00:09:19.690 | 99.99th=[22152] 00:09:19.690 bw ( KiB/s): min= 5600, max=28168, per=50.95%, avg=20040.00, stdev=6974.89, samples=11 00:09:19.690 iops : min= 1400, max= 7042, avg=5010.00, stdev=1743.72, samples=11 00:09:19.690 write: IOPS=5934, BW=23.2MiB/s (24.3MB/s)(119MiB/5134msec); 0 zone resets 00:09:19.690 slat (usec): min=12, max=1972, avg=59.89, stdev=151.46 00:09:19.690 clat (usec): min=701, max=19520, avg=7588.09, stdev=2413.56 00:09:19.690 lat (usec): min=727, max=19544, avg=7647.98, stdev=2421.04 00:09:19.690 clat percentiles (usec): 00:09:19.690 | 1.00th=[ 1926], 5.00th=[ 2933], 10.00th=[ 3884], 20.00th=[ 5407], 00:09:19.690 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8356], 00:09:19.690 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[11469], 00:09:19.690 | 99.00th=[13435], 99.50th=[14353], 99.90th=[18482], 99.95th=[18744], 00:09:19.690 | 99.99th=[19006] 00:09:19.690 bw ( KiB/s): min= 5984, max=27848, per=84.66%, avg=20098.91, stdev=6765.85, samples=11 00:09:19.690 iops : min= 1496, max= 6962, avg=5024.73, stdev=1691.46, samples=11 00:09:19.690 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.09% 00:09:19.690 lat (msec) : 2=0.97%, 4=5.61%, 10=76.79%, 20=16.30%, 50=0.16% 00:09:19.690 cpu : usr=5.33%, sys=21.17%, ctx=5407, majf=0, minf=114 00:09:19.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:19.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.690 issued rwts: total=59027,30470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.690 00:09:19.690 Run status group 0 (all jobs): 00:09:19.690 READ: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=231MiB (242MB), run=6003-6003msec 00:09:19.690 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=119MiB (125MB), run=5134-5134msec 00:09:19.690 00:09:19.690 Disk stats (read/write): 00:09:19.690 nvme0n1: ios=58132/29958, merge=0/0, ticks=502683/214671, in_queue=717354, util=98.68% 00:09:19.690 15:32:20 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.690 15:32:20 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.690 15:32:20 -- common/autotest_common.sh@1205 -- # local i=0 00:09:19.690 15:32:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:19.690 15:32:20 
-- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.690 15:32:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.690 15:32:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:19.690 15:32:20 -- common/autotest_common.sh@1217 -- # return 0 00:09:19.690 15:32:20 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.690 15:32:20 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:19.690 15:32:20 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:19.690 15:32:20 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:19.690 15:32:20 -- target/multipath.sh@144 -- # nvmftestfini 00:09:19.690 15:32:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:19.690 15:32:20 -- nvmf/common.sh@117 -- # sync 00:09:19.690 15:32:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.690 15:32:20 -- nvmf/common.sh@120 -- # set +e 00:09:19.690 15:32:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.690 15:32:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.690 rmmod nvme_tcp 00:09:19.690 rmmod nvme_fabrics 00:09:19.690 rmmod nvme_keyring 00:09:19.690 15:32:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.690 15:32:20 -- nvmf/common.sh@124 -- # set -e 00:09:19.690 15:32:20 -- nvmf/common.sh@125 -- # return 0 00:09:19.690 15:32:20 -- nvmf/common.sh@478 -- # '[' -n 67308 ']' 00:09:19.690 15:32:20 -- nvmf/common.sh@479 -- # killprocess 67308 00:09:19.690 15:32:20 -- common/autotest_common.sh@936 -- # '[' -z 67308 ']' 00:09:19.690 15:32:20 -- common/autotest_common.sh@940 -- # kill -0 67308 00:09:19.690 15:32:20 -- common/autotest_common.sh@941 -- # uname 00:09:19.690 15:32:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.690 15:32:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67308 00:09:19.690 killing process with pid 67308 00:09:19.690 15:32:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.690 15:32:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.690 15:32:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67308' 00:09:19.690 15:32:20 -- common/autotest_common.sh@955 -- # kill 67308 00:09:19.690 15:32:20 -- common/autotest_common.sh@960 -- # wait 67308 00:09:19.690 15:32:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:19.690 15:32:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:19.690 15:32:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:19.690 15:32:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.690 15:32:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.690 15:32:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.690 15:32:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.690 15:32:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.690 15:32:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.690 ************************************ 00:09:19.690 END TEST nvmf_multipath 00:09:19.690 ************************************ 00:09:19.690 00:09:19.690 real 0m19.318s 00:09:19.690 user 1m12.581s 00:09:19.690 sys 0m9.057s 00:09:19.690 15:32:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:19.690 15:32:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.690 15:32:20 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.690 15:32:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:19.690 15:32:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.690 15:32:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.690 ************************************ 00:09:19.690 START TEST nvmf_zcopy 00:09:19.690 ************************************ 00:09:19.690 15:32:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.690 * Looking for test storage... 00:09:19.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.690 15:32:21 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.690 15:32:21 -- nvmf/common.sh@7 -- # uname -s 00:09:19.690 15:32:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.690 15:32:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.690 15:32:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.690 15:32:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.690 15:32:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.690 15:32:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.690 15:32:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.690 15:32:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.690 15:32:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.690 15:32:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.690 15:32:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:19.690 15:32:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:19.690 15:32:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.690 15:32:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.690 15:32:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.690 15:32:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.690 15:32:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.690 15:32:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.690 15:32:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.690 15:32:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.690 15:32:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.690 15:32:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.691 15:32:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.691 15:32:21 -- paths/export.sh@5 -- # export PATH 00:09:19.691 15:32:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.691 15:32:21 -- nvmf/common.sh@47 -- # : 0 00:09:19.691 15:32:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.691 15:32:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.691 15:32:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.691 15:32:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.691 15:32:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.691 15:32:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.691 15:32:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.691 15:32:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.691 15:32:21 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:19.691 15:32:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:19.691 15:32:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.691 15:32:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:19.691 15:32:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:19.691 15:32:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:19.691 15:32:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.691 15:32:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.691 15:32:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.691 15:32:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:19.691 15:32:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:19.691 15:32:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:19.691 15:32:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:19.691 15:32:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:19.691 15:32:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:19.691 15:32:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.691 15:32:21 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.691 15:32:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.691 15:32:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.691 15:32:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.691 15:32:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.691 15:32:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.691 15:32:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.691 15:32:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.691 15:32:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.691 15:32:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.691 15:32:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.691 15:32:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.691 15:32:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.950 Cannot find device "nvmf_tgt_br" 00:09:19.950 15:32:21 -- nvmf/common.sh@155 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.950 Cannot find device "nvmf_tgt_br2" 00:09:19.950 15:32:21 -- nvmf/common.sh@156 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.950 15:32:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.950 Cannot find device "nvmf_tgt_br" 00:09:19.950 15:32:21 -- nvmf/common.sh@158 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.950 Cannot find device "nvmf_tgt_br2" 00:09:19.950 15:32:21 -- nvmf/common.sh@159 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.950 15:32:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.950 15:32:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.950 15:32:21 -- nvmf/common.sh@162 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.950 15:32:21 -- nvmf/common.sh@163 -- # true 00:09:19.950 15:32:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.950 15:32:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.950 15:32:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.950 15:32:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.950 15:32:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.950 15:32:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.950 15:32:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.950 15:32:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.950 15:32:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.950 15:32:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:19.950 15:32:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:19.950 15:32:21 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:19.950 15:32:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:19.950 15:32:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.950 15:32:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.950 15:32:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.950 15:32:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:19.950 15:32:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:19.950 15:32:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.950 15:32:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.950 15:32:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.209 15:32:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.209 15:32:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.209 15:32:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:09:20.209 00:09:20.209 --- 10.0.0.2 ping statistics --- 00:09:20.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.209 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:20.209 15:32:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:20.209 00:09:20.209 --- 10.0.0.3 ping statistics --- 00:09:20.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.209 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:20.209 15:32:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:20.209 00:09:20.209 --- 10.0.0.1 ping statistics --- 00:09:20.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.209 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:20.209 15:32:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.209 15:32:21 -- nvmf/common.sh@422 -- # return 0 00:09:20.209 15:32:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:20.209 15:32:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.209 15:32:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:20.209 15:32:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:20.209 15:32:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.210 15:32:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:20.210 15:32:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:20.210 15:32:21 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:20.210 15:32:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:20.210 15:32:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:20.210 15:32:21 -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 15:32:21 -- nvmf/common.sh@470 -- # nvmfpid=67779 00:09:20.210 15:32:21 -- nvmf/common.sh@471 -- # waitforlisten 67779 00:09:20.210 15:32:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.210 15:32:21 -- common/autotest_common.sh@817 -- # '[' -z 67779 ']' 00:09:20.210 15:32:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.210 15:32:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.210 15:32:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.210 15:32:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.210 15:32:21 -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 [2024-04-17 15:32:21.510726] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:09:20.210 [2024-04-17 15:32:21.510852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.469 [2024-04-17 15:32:21.653947] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.469 [2024-04-17 15:32:21.773481] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.469 [2024-04-17 15:32:21.773529] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.469 [2024-04-17 15:32:21.773557] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.469 [2024-04-17 15:32:21.773565] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.469 [2024-04-17 15:32:21.773572] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.469 [2024-04-17 15:32:21.773597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.036 15:32:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.037 15:32:22 -- common/autotest_common.sh@850 -- # return 0 00:09:21.037 15:32:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:21.037 15:32:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:21.037 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 15:32:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.318 15:32:22 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:21.318 15:32:22 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 [2024-04-17 15:32:22.487657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 [2024-04-17 15:32:22.503744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 malloc0 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:21.318 15:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.318 15:32:22 -- common/autotest_common.sh@10 -- # set +x 00:09:21.318 15:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.318 15:32:22 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:21.318 15:32:22 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:21.318 15:32:22 -- nvmf/common.sh@521 -- # config=() 00:09:21.318 15:32:22 -- nvmf/common.sh@521 -- # local subsystem config 00:09:21.318 15:32:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:21.318 15:32:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:21.318 { 00:09:21.318 "params": { 00:09:21.318 "name": "Nvme$subsystem", 00:09:21.318 "trtype": "$TEST_TRANSPORT", 
00:09:21.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.318 "adrfam": "ipv4", 00:09:21.318 "trsvcid": "$NVMF_PORT", 00:09:21.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.318 "hdgst": ${hdgst:-false}, 00:09:21.318 "ddgst": ${ddgst:-false} 00:09:21.318 }, 00:09:21.318 "method": "bdev_nvme_attach_controller" 00:09:21.318 } 00:09:21.318 EOF 00:09:21.318 )") 00:09:21.318 15:32:22 -- nvmf/common.sh@543 -- # cat 00:09:21.318 15:32:22 -- nvmf/common.sh@545 -- # jq . 00:09:21.318 15:32:22 -- nvmf/common.sh@546 -- # IFS=, 00:09:21.318 15:32:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:21.318 "params": { 00:09:21.318 "name": "Nvme1", 00:09:21.318 "trtype": "tcp", 00:09:21.318 "traddr": "10.0.0.2", 00:09:21.318 "adrfam": "ipv4", 00:09:21.318 "trsvcid": "4420", 00:09:21.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.318 "hdgst": false, 00:09:21.318 "ddgst": false 00:09:21.318 }, 00:09:21.318 "method": "bdev_nvme_attach_controller" 00:09:21.318 }' 00:09:21.318 [2024-04-17 15:32:22.596278] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:09:21.318 [2024-04-17 15:32:22.596375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67817 ] 00:09:21.318 [2024-04-17 15:32:22.738338] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.585 [2024-04-17 15:32:22.856463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.585 [2024-04-17 15:32:22.865500] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:09:21.844 [2024-04-17 15:32:23.044314] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:09:21.844 Running I/O for 10 seconds... 
00:09:31.821 00:09:31.821 Latency(us) 00:09:31.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:31.821 Verification LBA range: start 0x0 length 0x1000 00:09:31.821 Nvme1n1 : 10.01 6373.49 49.79 0.00 0.00 20021.54 2234.18 30146.56 00:09:31.821 =================================================================================================================== 00:09:31.821 Total : 6373.49 49.79 0.00 0.00 20021.54 2234.18 30146.56 00:09:32.080 15:32:33 -- target/zcopy.sh@39 -- # perfpid=67934 00:09:32.080 15:32:33 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:32.080 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:09:32.080 15:32:33 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:32.080 15:32:33 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:32.080 15:32:33 -- nvmf/common.sh@521 -- # config=() 00:09:32.080 15:32:33 -- nvmf/common.sh@521 -- # local subsystem config 00:09:32.080 15:32:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:32.080 15:32:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:32.080 { 00:09:32.080 "params": { 00:09:32.080 "name": "Nvme$subsystem", 00:09:32.080 "trtype": "$TEST_TRANSPORT", 00:09:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.080 "adrfam": "ipv4", 00:09:32.080 "trsvcid": "$NVMF_PORT", 00:09:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.080 "hdgst": ${hdgst:-false}, 00:09:32.080 "ddgst": ${ddgst:-false} 00:09:32.080 }, 00:09:32.080 "method": "bdev_nvme_attach_controller" 00:09:32.080 } 00:09:32.080 EOF 00:09:32.080 )") 00:09:32.080 15:32:33 -- nvmf/common.sh@543 -- # cat 00:09:32.080 15:32:33 -- nvmf/common.sh@545 -- # jq . 00:09:32.080 [2024-04-17 15:32:33.393194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.393244] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 15:32:33 -- nvmf/common.sh@546 -- # IFS=, 00:09:32.080 15:32:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:32.080 "params": { 00:09:32.080 "name": "Nvme1", 00:09:32.080 "trtype": "tcp", 00:09:32.080 "traddr": "10.0.0.2", 00:09:32.080 "adrfam": "ipv4", 00:09:32.080 "trsvcid": "4420", 00:09:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.080 "hdgst": false, 00:09:32.080 "ddgst": false 00:09:32.080 }, 00:09:32.080 "method": "bdev_nvme_attach_controller" 00:09:32.080 }' 00:09:32.080 [2024-04-17 15:32:33.405141] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.405171] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.417134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.417164] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.429139] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.429169] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.438841] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:09:32.080 [2024-04-17 15:32:33.438937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67934 ] 00:09:32.080 [2024-04-17 15:32:33.441142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.441327] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.453184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.453391] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.465187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.465353] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.477195] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.477364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.489186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.489321] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.501191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.501325] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.080 [2024-04-17 15:32:33.513196] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.080 [2024-04-17 15:32:33.513330] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.525204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.525339] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.537213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.537396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.549237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.549418] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.561247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.561408] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.573244] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.573404] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.579510] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.339 [2024-04-17 15:32:33.585243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.585272] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:32.339 [2024-04-17 15:32:33.597246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.597275] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.609244] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.609272] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.621266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.621301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.633272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.633309] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.645258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.645285] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.657255] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.657281] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.669264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.669294] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.681260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.681288] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.693261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.693287] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.705265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.705292] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.709391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.339 [2024-04-17 15:32:33.717272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.717303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.718353] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:09:32.339 [2024-04-17 15:32:33.729275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.729304] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.741320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.741353] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.753323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 
[2024-04-17 15:32:33.753361] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.765325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.765372] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.339 [2024-04-17 15:32:33.777340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.339 [2024-04-17 15:32:33.777373] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.789329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.789362] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.801307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.801338] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.813326] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.813358] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.825356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.825420] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.837351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.837410] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.849350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.849392] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.861344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.861381] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.873350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.873381] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.885355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.885388] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.897356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.897388] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.909365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.909396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.598 [2024-04-17 15:32:33.921393] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.598 [2024-04-17 15:32:33.921434] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:33.928748] rpc.c: 
223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:09:32.599 Running I/O for 5 seconds... 00:09:32.599 [2024-04-17 15:32:33.933404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:33.933436] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:33.952070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:33.952149] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:33.966177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:33.966248] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:33.981529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:33.981574] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:33.990809] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:33.990853] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:34.005707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:34.005742] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:34.021994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:34.022029] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.599 [2024-04-17 15:32:34.037545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.599 [2024-04-17 15:32:34.037579] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.055743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.055785] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.072338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.072373] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.088244] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.088277] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.105707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.105741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.120636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.120669] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.132583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.132618] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 
15:32:34.148927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.148961] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.165525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.165558] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.183425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.183460] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.197274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.197326] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.213070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.213111] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.230521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.230555] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.247255] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.247289] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.262959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.262990] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.281307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.281342] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.858 [2024-04-17 15:32:34.296412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.858 [2024-04-17 15:32:34.296445] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.312194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.312235] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.328406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.328439] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.346036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.346072] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.361638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.361671] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.372334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.372367] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.389086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.389150] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.403571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.403609] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.420912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.420949] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.434956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.434989] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.450958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.450992] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.467601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.467635] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.485449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.485491] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.499568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.499603] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.515073] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.515110] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.533581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.533615] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.117 [2024-04-17 15:32:34.548969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.117 [2024-04-17 15:32:34.549019] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.375 [2024-04-17 15:32:34.568192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.375 [2024-04-17 15:32:34.568240] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.375 [2024-04-17 15:32:34.583153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.375 [2024-04-17 15:32:34.583189] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.375 [2024-04-17 15:32:34.593541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.375 [2024-04-17 15:32:34.593574] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.375 [2024-04-17 15:32:34.608569] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.608603] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.625583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.625617] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.641790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.641859] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.660836] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.660870] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.676184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.676232] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.693210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.693259] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.709588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.709623] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.727371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.727406] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.741353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.741405] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.757112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.757149] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.767206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.767241] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.782375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.782409] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.799681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.799731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.376 [2024-04-17 15:32:34.815677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.376 [2024-04-17 15:32:34.815731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.833280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.833320] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.849495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.849530] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.866162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.866215] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.882543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.882577] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.899109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.899143] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.915419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.915453] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.932693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.932729] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.947842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.947892] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.963914] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.963950] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.973641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.973675] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:34.989621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:34.989655] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:35.006822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:35.006868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:35.024109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:35.024142] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:35.039263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:35.039298] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:35.055343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:35.055380] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.634 [2024-04-17 15:32:35.072137] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.634 [2024-04-17 15:32:35.072227] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.088354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.088388] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.104747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.104831] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.120898] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.120931] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.138940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.138972] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.153169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.153204] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.167810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.167876] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.185456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.185501] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.199671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.199705] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.215098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.215148] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.232831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.232864] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.247997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.248029] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.263371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.263405] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.281384] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.281419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.297013] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.297056] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.306319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.306353] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.893 [2024-04-17 15:32:35.322510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.893 [2024-04-17 15:32:35.322544] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.340199] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.340238] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.356250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.356289] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.372814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.372870] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.391970] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.392010] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.406720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.406767] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.424665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.424699] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.439865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.439897] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.455122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.455170] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.473532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.473566] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.488017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.488054] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.503611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.503654] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.521835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.521869] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.537402] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.537436] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.554269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.554303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.572510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.572544] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.152 [2024-04-17 15:32:35.586556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.152 [2024-04-17 15:32:35.586590] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.603413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.603446] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.619105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.619155] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.636430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.636464] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.652741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.652784] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.670790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.670832] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.686340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.686375] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.695732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.695812] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.711053] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.711087] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.725674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.725707] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.742198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.742265] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.757976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.758012] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.767228] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.767265] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.783905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.783940] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.801370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.801404] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.816274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.816309] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.833901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.833939] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.411 [2024-04-17 15:32:35.849147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.411 [2024-04-17 15:32:35.849197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.859035] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.859069] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.875046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.875081] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.884729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.884809] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.900631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.900666] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.916077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.916112] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.925236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.925271] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.940878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.940912] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.957134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.957168] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.972211] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.972246] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:35.987381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:35.987415] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.004626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.004659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.020886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.020919] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.038322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.038357] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.052812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.052846] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.068108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.068163] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.079747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.079821] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.670 [2024-04-17 15:32:36.095570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.670 [2024-04-17 15:32:36.095606] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.112691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.112725] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.127964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.127999] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.143266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.143299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.159227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.159261] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.175736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.175835] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.192465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.192500] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.210874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.210907] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.224909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.224942] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.240615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.240648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.258558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.258593] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.274927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.274961] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.292859] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.292893] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.309022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.309086] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.331088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.331125] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.346254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.346289] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.929 [2024-04-17 15:32:36.355981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.929 [2024-04-17 15:32:36.356018] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.188 [2024-04-17 15:32:36.371864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.188 [2024-04-17 15:32:36.371899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.188 [2024-04-17 15:32:36.388935] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.188 [2024-04-17 15:32:36.388987] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.188 [2024-04-17 15:32:36.406614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.188 [2024-04-17 15:32:36.406648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.188 [2024-04-17 15:32:36.420868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.188 [2024-04-17 15:32:36.420903] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.188 [2024-04-17 15:32:36.436605] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.436642] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.452683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.452735] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.471791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.471843] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.485974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.486012] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.502927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.502961] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.518171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.518221] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.533967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.534000] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.551313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.551346] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.567044] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.567078] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.584669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.584703] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.599518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.599552] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.189 [2024-04-17 15:32:36.614795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.189 [2024-04-17 15:32:36.614856] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.633077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.633110] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.647542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.647578] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.663740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.663820] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.681019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.681054] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.697117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.697151] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.715098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.715147] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.728620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.728654] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.744356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.744390] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.762267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.762301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.778863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.778897] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.795115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.795149] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.812301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.812334] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.830436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.830473] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.845798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.845838] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.855998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.856032] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.867874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.867916] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.448 [2024-04-17 15:32:36.882944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.448 [2024-04-17 15:32:36.882978] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.899270] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.899303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.915614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.915648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.932623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.932657] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.949280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.949315] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.965334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.965385] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.982352] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.982389] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:36.999039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:36.999074] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.015594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.015629] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.031694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.031730] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.050354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.050388] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.064175] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.064225] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.081213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.081248] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.097416] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.097451] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.113865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.113926] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.130502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.130539] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.707 [2024-04-17 15:32:37.147335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.707 [2024-04-17 15:32:37.147370] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.163501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.163537] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.172919] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.172969] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.188077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.188128] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.198227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.198278] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.214086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.214124] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.223284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.223318] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.238548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.238583] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.253992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.254028] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.272416] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.272450] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.286541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.286574] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.302028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.302063] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.320385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.320419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.335307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.335341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.350565] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.350598] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.369276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.369310] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.383476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.383524] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.966 [2024-04-17 15:32:37.400099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.966 [2024-04-17 15:32:37.400134] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.415530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.415565] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.427589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.427622] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.443210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.443243] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.460523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.460556] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.476715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.476749] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.492982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.493016] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.510437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.510481] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.527475] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.527509] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.544047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.544080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.561831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.561885] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.575659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.575693] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.590957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.590990] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.601973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.602012] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.618399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.618435] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.633952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.633988] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.645514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.645548] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.225 [2024-04-17 15:32:37.661470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.225 [2024-04-17 15:32:37.661505] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.676956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.676989] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.689393] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.689426] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.705631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.705665] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.721204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.721240] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.738590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.738624] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.753955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.753992] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.763092] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.763125] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.775006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.775038] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.792407] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.792441] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.807647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.807690] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.819237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.819271] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.835321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.835355] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.852072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.852108] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.869271] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.869332] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.885454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.885488] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.903310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.903345] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.484 [2024-04-17 15:32:37.918884] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.484 [2024-04-17 15:32:37.918919] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:37.930331] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:37.930365] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:37.946304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:37.946338] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:37.963385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:37.963419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:37.979623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:37.979663] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:37.997505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:37.997540] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.014305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.014340] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.030974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.031010] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.048269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.048304] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.065997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.066032] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.080757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.080817] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.096785] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.096818] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.114680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.114714] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.129397] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.129435] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.144925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.144964] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.153923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.153960] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.744 [2024-04-17 15:32:38.169462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.744 [2024-04-17 15:32:38.169508] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.185847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.185901] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.202035] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.202069] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.221097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.221132] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.236059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.236111] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.246121] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.246158] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.261716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.261780] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.277338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.277375] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.286542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.286577] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.303066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.303102] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.322455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.322490] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.336810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.336841] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.348697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.348753] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.364293] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.364328] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.380549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.380584] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.397708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.397743] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.414866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.414898] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.003 [2024-04-17 15:32:38.431072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.003 [2024-04-17 15:32:38.431107] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.448602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.448638] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.464471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.464505] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.482208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.482242] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.499811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.499844] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.514922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.514956] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.530877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.530909] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.546905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.546937] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.565307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.565355] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.581450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.581482] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.597968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.598003] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.616269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.616303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.632521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.632587] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.648435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.648470] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.666953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.666986] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.682259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.682293] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.262 [2024-04-17 15:32:38.693016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.262 [2024-04-17 15:32:38.693051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.521 [2024-04-17 15:32:38.709759] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.521 [2024-04-17 15:32:38.709857] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.521 [2024-04-17 15:32:38.725686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.521 [2024-04-17 15:32:38.725720] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.521 [2024-04-17 15:32:38.743111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.521 [2024-04-17 15:32:38.743146] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.521 [2024-04-17 15:32:38.758495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.521 [2024-04-17 15:32:38.758529] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.521 [2024-04-17 15:32:38.770014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.770050] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.786062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.786099] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.801549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.801593] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.812654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.812688] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.828981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.829017] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.844589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.844622] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.856469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.856502] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.872433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.872473] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.888831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.888866] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.904421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.904452] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.919619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.919653] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.928444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.928479] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 [2024-04-17 15:32:38.940406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.940440] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.522 00:09:37.522 Latency(us) 00:09:37.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.522 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:37.522 Nvme1n1 : 5.01 12415.17 96.99 0.00 0.00 10297.03 4230.05 19184.17 00:09:37.522 =================================================================================================================== 00:09:37.522 Total : 12415.17 96.99 0.00 0.00 10297.03 4230.05 19184.17 00:09:37.522 [2024-04-17 15:32:38.952422] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.522 [2024-04-17 15:32:38.952456] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:38.964415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:38.964448] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:38.976438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:38.976474] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:38.988460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:38.988497] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:39.000469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:39.000507] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:39.012474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:39.012517] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:39.024497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:39.024559] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:39.036488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.781 [2024-04-17 15:32:39.036534] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.781 [2024-04-17 15:32:39.048519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.048569] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.060516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.060596] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.072507] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.072549] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.084508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.084553] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.096508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.096550] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.108508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.108545] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.120511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.120549] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.132511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.132547] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.144498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.144528] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.156492] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.156519] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.168498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.168529] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.180503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.180533] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.192527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.192577] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.204512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.204543] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.782 [2024-04-17 15:32:39.216508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.782 [2024-04-17 15:32:39.216536] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.228513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.228539] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.240512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.240539] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.252542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.252577] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.264554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.264592] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.276534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.276570] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 [2024-04-17 15:32:39.288549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.041 [2024-04-17 15:32:39.288580] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.041 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67934) - No such process 00:09:38.041 15:32:39 -- target/zcopy.sh@49 -- # wait 67934 00:09:38.041 15:32:39 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.041 15:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.041 15:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:38.041 15:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.041 15:32:39 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:38.041 15:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.041 15:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:38.041 delay0 00:09:38.041 15:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.041 15:32:39 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:38.041 15:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.041 15:32:39 -- common/autotest_common.sh@10 -- # set +x 00:09:38.041 15:32:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.041 15:32:39 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:38.299 [2024-04-17 15:32:39.484736] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:44.866 Initializing NVMe Controllers 00:09:44.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:44.866 Initialization complete. Launching workers. 
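The abort pass above stacks a delay bdev on top of malloc0 so that queued I/O stays in flight long enough for the abort example to cancel it. A minimal by-hand sketch of the same setup, assuming an nvmf target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and that rpc.py talks to it over its default socket; the names, latency arguments and abort invocation are copied from the trace, while the malloc create step is an assumption about how malloc0 was made earlier in the script:

# back the delay bdev with a 64 MiB, 512-byte-block malloc bdev
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# wrap it in a delay bdev; the four latency knobs match the trace above
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# expose the delay bdev as namespace 1 of the existing subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# 5 s of 50/50 random I/O at queue depth 64, with aborts submitted against it
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The large delay latencies are what make the abort statistics below non-trivial; against an undelayed bdev most commands would likely complete before the abort arrives.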
00:09:44.866 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:09:44.866 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:09:44.866 success 241, unsuccess 120, failed 0 00:09:44.866 15:32:45 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:44.866 15:32:45 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:44.866 15:32:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:44.866 15:32:45 -- nvmf/common.sh@117 -- # sync 00:09:44.866 15:32:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.866 15:32:45 -- nvmf/common.sh@120 -- # set +e 00:09:44.866 15:32:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.866 15:32:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.866 rmmod nvme_tcp 00:09:44.866 rmmod nvme_fabrics 00:09:44.866 rmmod nvme_keyring 00:09:44.866 15:32:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.866 15:32:45 -- nvmf/common.sh@124 -- # set -e 00:09:44.866 15:32:45 -- nvmf/common.sh@125 -- # return 0 00:09:44.866 15:32:45 -- nvmf/common.sh@478 -- # '[' -n 67779 ']' 00:09:44.866 15:32:45 -- nvmf/common.sh@479 -- # killprocess 67779 00:09:44.866 15:32:45 -- common/autotest_common.sh@936 -- # '[' -z 67779 ']' 00:09:44.866 15:32:45 -- common/autotest_common.sh@940 -- # kill -0 67779 00:09:44.866 15:32:45 -- common/autotest_common.sh@941 -- # uname 00:09:44.866 15:32:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.866 15:32:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67779 00:09:44.866 killing process with pid 67779 00:09:44.866 15:32:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:44.866 15:32:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:44.866 15:32:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67779' 00:09:44.866 15:32:45 -- common/autotest_common.sh@955 -- # kill 67779 00:09:44.866 15:32:45 -- common/autotest_common.sh@960 -- # wait 67779 00:09:44.866 15:32:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:44.866 15:32:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:44.866 15:32:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:44.866 15:32:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.866 15:32:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.866 15:32:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.866 15:32:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.866 15:32:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.866 15:32:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:44.866 00:09:44.866 real 0m25.056s 00:09:44.866 user 0m41.143s 00:09:44.866 sys 0m6.925s 00:09:44.866 ************************************ 00:09:44.866 END TEST nvmf_zcopy 00:09:44.866 ************************************ 00:09:44.866 15:32:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.866 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:09:44.866 15:32:46 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:44.866 15:32:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:44.866 15:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.866 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:09:44.866 ************************************ 00:09:44.866 START TEST nvmf_nmic 
00:09:44.866 ************************************ 00:09:44.866 15:32:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:44.866 * Looking for test storage... 00:09:44.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.866 15:32:46 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.866 15:32:46 -- nvmf/common.sh@7 -- # uname -s 00:09:44.866 15:32:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.866 15:32:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.866 15:32:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.866 15:32:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.866 15:32:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.866 15:32:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.866 15:32:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.866 15:32:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.866 15:32:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.866 15:32:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.866 15:32:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:44.866 15:32:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:44.866 15:32:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.866 15:32:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.866 15:32:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.866 15:32:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.866 15:32:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.866 15:32:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.866 15:32:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.866 15:32:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.866 15:32:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.866 15:32:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.866 15:32:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.866 15:32:46 -- paths/export.sh@5 -- # export PATH 00:09:44.866 15:32:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.866 15:32:46 -- nvmf/common.sh@47 -- # : 0 00:09:44.866 15:32:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.866 15:32:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.866 15:32:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.866 15:32:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.866 15:32:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.866 15:32:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.866 15:32:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.866 15:32:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.866 15:32:46 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.866 15:32:46 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.866 15:32:46 -- target/nmic.sh@14 -- # nvmftestinit 00:09:44.866 15:32:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:44.866 15:32:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.866 15:32:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:44.866 15:32:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:44.866 15:32:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:44.866 15:32:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.866 15:32:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.866 15:32:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.125 15:32:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:45.125 15:32:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:45.125 15:32:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:45.125 15:32:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:45.125 15:32:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:45.125 15:32:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:45.125 15:32:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.125 15:32:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.125 15:32:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.125 15:32:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:45.125 15:32:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.125 15:32:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.125 15:32:46 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.125 15:32:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.125 15:32:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.125 15:32:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.125 15:32:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.125 15:32:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.125 15:32:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.125 15:32:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.125 Cannot find device "nvmf_tgt_br" 00:09:45.125 15:32:46 -- nvmf/common.sh@155 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.125 Cannot find device "nvmf_tgt_br2" 00:09:45.125 15:32:46 -- nvmf/common.sh@156 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.125 15:32:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.125 Cannot find device "nvmf_tgt_br" 00:09:45.125 15:32:46 -- nvmf/common.sh@158 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.125 Cannot find device "nvmf_tgt_br2" 00:09:45.125 15:32:46 -- nvmf/common.sh@159 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.125 15:32:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.125 15:32:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.125 15:32:46 -- nvmf/common.sh@162 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.125 15:32:46 -- nvmf/common.sh@163 -- # true 00:09:45.125 15:32:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.125 15:32:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.125 15:32:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.126 15:32:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.126 15:32:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.126 15:32:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.126 15:32:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.126 15:32:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.126 15:32:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.126 15:32:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.126 15:32:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.126 15:32:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:45.126 15:32:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:45.126 15:32:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.126 15:32:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.126 15:32:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:45.385 15:32:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:45.385 15:32:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:45.385 15:32:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.385 15:32:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.385 15:32:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.385 15:32:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.385 15:32:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.385 15:32:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:45.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:45.385 00:09:45.385 --- 10.0.0.2 ping statistics --- 00:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.385 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:45.385 15:32:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:45.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:45.385 00:09:45.385 --- 10.0.0.3 ping statistics --- 00:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.385 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:45.385 15:32:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:45.385 00:09:45.385 --- 10.0.0.1 ping statistics --- 00:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.385 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:45.385 15:32:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.385 15:32:46 -- nvmf/common.sh@422 -- # return 0 00:09:45.385 15:32:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:45.385 15:32:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.385 15:32:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:45.385 15:32:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:45.385 15:32:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.385 15:32:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:45.385 15:32:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:45.385 15:32:46 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:45.385 15:32:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:45.385 15:32:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:45.385 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:09:45.385 15:32:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.385 15:32:46 -- nvmf/common.sh@470 -- # nvmfpid=68267 00:09:45.385 15:32:46 -- nvmf/common.sh@471 -- # waitforlisten 68267 00:09:45.385 15:32:46 -- common/autotest_common.sh@817 -- # '[' -z 68267 ']' 00:09:45.385 15:32:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.385 15:32:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:45.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
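For reference, the topology that nvmf_veth_init assembles above can be reproduced by hand roughly as follows. Interface names, addresses and the iptables rules are taken from the trace; this is a sketch of the happy path only (the real helper also wires up the second target interface, nvmf_tgt_if2 / 10.0.0.3, and tears everything down on cleanup):

# the target runs inside its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator side is 10.0.0.1, target side is 10.0.0.2
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# a bridge joins the host-side veth peers so the two ends can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP (port 4420) in and allow bridged forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # same reachability check the harness runs

The sub-0.1 ms ping times in the trace are what a veth pair on a single host normally shows; they are a sanity check, not a performance number.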
00:09:45.385 15:32:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.385 15:32:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:45.385 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:09:45.385 [2024-04-17 15:32:46.745002] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:09:45.385 [2024-04-17 15:32:46.745129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.644 [2024-04-17 15:32:46.881416] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.644 [2024-04-17 15:32:47.031408] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.644 [2024-04-17 15:32:47.031779] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.644 [2024-04-17 15:32:47.031893] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.644 [2024-04-17 15:32:47.032066] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.644 [2024-04-17 15:32:47.032203] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.644 [2024-04-17 15:32:47.032448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.644 [2024-04-17 15:32:47.035816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.644 [2024-04-17 15:32:47.037830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.644 [2024-04-17 15:32:47.037843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.581 15:32:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:46.581 15:32:47 -- common/autotest_common.sh@850 -- # return 0 00:09:46.581 15:32:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:46.581 15:32:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 15:32:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.581 15:32:47 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.581 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 [2024-04-17 15:32:47.716406] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.581 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.581 15:32:47 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.581 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 Malloc0 00:09:46.581 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.581 15:32:47 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.581 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.581 15:32:47 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.581 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.581 15:32:47 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.581 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.581 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.581 [2024-04-17 15:32:47.790992] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.581 test case1: single bdev can't be used in multiple subsystems 00:09:46.581 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.581 15:32:47 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:46.582 15:32:47 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:46.582 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.582 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.582 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.582 15:32:47 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:46.582 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.582 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.582 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.582 15:32:47 -- target/nmic.sh@28 -- # nmic_status=0 00:09:46.582 15:32:47 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:46.582 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.582 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.582 [2024-04-17 15:32:47.814799] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:46.582 [2024-04-17 15:32:47.814853] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:46.582 [2024-04-17 15:32:47.814865] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.582 request: 00:09:46.582 { 00:09:46.582 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:46.582 "namespace": { 00:09:46.582 "bdev_name": "Malloc0", 00:09:46.582 "no_auto_visible": false 00:09:46.582 }, 00:09:46.582 "method": "nvmf_subsystem_add_ns", 00:09:46.582 "req_id": 1 00:09:46.582 } 00:09:46.582 Got JSON-RPC error response 00:09:46.582 response: 00:09:46.582 { 00:09:46.582 "code": -32602, 00:09:46.582 "message": "Invalid parameters" 00:09:46.582 } 00:09:46.582 Adding namespace failed - expected result. 00:09:46.582 test case2: host connect to nvmf target in multiple paths 00:09:46.582 15:32:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:46.582 15:32:47 -- target/nmic.sh@29 -- # nmic_status=1 00:09:46.582 15:32:47 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:46.582 15:32:47 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
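Test case 1 above is a deliberate negative test: Malloc0 is already claimed with exclusive_write by cnode1, so attaching it to a second subsystem has to fail, and the script only passes when the RPC returns an error. A hand-run sketch of the same check, using the subsystem names from the trace (the && / || reporting is illustration, not the script's own logic):

# Malloc0 already backs a namespace of cnode1; adding it to cnode2 must be rejected
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected: namespace was added' \
    || echo 'Adding namespace failed - expected result.'

The failing call comes back as JSON-RPC error -32602 (Invalid parameters), which matches the request/response dump above.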
00:09:46.582 15:32:47 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:46.582 15:32:47 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:46.582 15:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.582 15:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.582 [2024-04-17 15:32:47.826926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:46.582 15:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.582 15:32:47 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.582 15:32:47 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:46.841 15:32:48 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.841 15:32:48 -- common/autotest_common.sh@1184 -- # local i=0 00:09:46.841 15:32:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.841 15:32:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:46.841 15:32:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:48.744 15:32:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:48.744 15:32:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:48.744 15:32:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.744 15:32:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:48.744 15:32:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.744 15:32:50 -- common/autotest_common.sh@1194 -- # return 0 00:09:48.744 15:32:50 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.744 [global] 00:09:48.744 thread=1 00:09:48.744 invalidate=1 00:09:48.744 rw=write 00:09:48.744 time_based=1 00:09:48.744 runtime=1 00:09:48.744 ioengine=libaio 00:09:48.744 direct=1 00:09:48.744 bs=4096 00:09:48.744 iodepth=1 00:09:48.744 norandommap=0 00:09:48.744 numjobs=1 00:09:48.744 00:09:48.744 verify_dump=1 00:09:48.744 verify_backlog=512 00:09:48.744 verify_state_save=0 00:09:48.744 do_verify=1 00:09:48.744 verify=crc32c-intel 00:09:48.744 [job0] 00:09:48.744 filename=/dev/nvme0n1 00:09:48.744 Could not set queue depth (nvme0n1) 00:09:49.003 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.003 fio-3.35 00:09:49.003 Starting 1 thread 00:09:50.380 00:09:50.380 job0: (groupid=0, jobs=1): err= 0: pid=68359: Wed Apr 17 15:32:51 2024 00:09:50.380 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:09:50.380 slat (nsec): min=11230, max=50171, avg=14156.61, stdev=3763.25 00:09:50.380 clat (usec): min=130, max=2250, avg=182.34, stdev=70.09 00:09:50.380 lat (usec): min=144, max=2272, avg=196.50, stdev=70.49 00:09:50.380 clat percentiles (usec): 00:09:50.380 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:50.380 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:09:50.380 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 235], 00:09:50.380 | 99.00th=[ 273], 99.50th=[ 293], 
99.90th=[ 1582], 99.95th=[ 1844], 00:09:50.380 | 99.99th=[ 2245] 00:09:50.380 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:50.380 slat (usec): min=15, max=117, avg=22.16, stdev= 7.32 00:09:50.380 clat (usec): min=81, max=7296, avg=122.94, stdev=183.80 00:09:50.380 lat (usec): min=98, max=7334, avg=145.10, stdev=184.77 00:09:50.380 clat percentiles (usec): 00:09:50.380 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 98], 00:09:50.380 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 113], 60.00th=[ 119], 00:09:50.380 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 147], 95.00th=[ 161], 00:09:50.380 | 99.00th=[ 184], 99.50th=[ 204], 99.90th=[ 2671], 99.95th=[ 5342], 00:09:50.380 | 99.99th=[ 7308] 00:09:50.380 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:50.380 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:50.380 lat (usec) : 100=13.88%, 250=84.92%, 500=1.01%, 750=0.02%, 1000=0.02% 00:09:50.380 lat (msec) : 2=0.07%, 4=0.03%, 10=0.05% 00:09:50.380 cpu : usr=2.80%, sys=7.60%, ctx=5845, majf=0, minf=2 00:09:50.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.380 issued rwts: total=2771,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.380 00:09:50.380 Run status group 0 (all jobs): 00:09:50.380 READ: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.3MB), run=1001-1001msec 00:09:50.380 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:50.380 00:09:50.380 Disk stats (read/write): 00:09:50.380 nvme0n1: ios=2604/2560, merge=0/0, ticks=509/354, in_queue=863, util=89.65% 00:09:50.380 15:32:51 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:50.380 15:32:51 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.380 15:32:51 -- common/autotest_common.sh@1205 -- # local i=0 00:09:50.380 15:32:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:50.380 15:32:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.380 15:32:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.381 15:32:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:50.381 15:32:51 -- common/autotest_common.sh@1217 -- # return 0 00:09:50.381 15:32:51 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:50.381 15:32:51 -- target/nmic.sh@53 -- # nvmftestfini 00:09:50.381 15:32:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:50.381 15:32:51 -- nvmf/common.sh@117 -- # sync 00:09:50.381 15:32:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.381 15:32:51 -- nvmf/common.sh@120 -- # set +e 00:09:50.381 15:32:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.381 15:32:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.381 rmmod nvme_tcp 00:09:50.381 rmmod nvme_fabrics 00:09:50.381 rmmod nvme_keyring 00:09:50.381 15:32:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.381 15:32:51 -- nvmf/common.sh@124 -- # set -e 00:09:50.381 15:32:51 -- nvmf/common.sh@125 -- # return 0 00:09:50.381 15:32:51 -- 
nvmf/common.sh@478 -- # '[' -n 68267 ']' 00:09:50.381 15:32:51 -- nvmf/common.sh@479 -- # killprocess 68267 00:09:50.381 15:32:51 -- common/autotest_common.sh@936 -- # '[' -z 68267 ']' 00:09:50.381 15:32:51 -- common/autotest_common.sh@940 -- # kill -0 68267 00:09:50.381 15:32:51 -- common/autotest_common.sh@941 -- # uname 00:09:50.381 15:32:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.381 15:32:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68267 00:09:50.381 killing process with pid 68267 00:09:50.381 15:32:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.381 15:32:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.381 15:32:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68267' 00:09:50.381 15:32:51 -- common/autotest_common.sh@955 -- # kill 68267 00:09:50.381 15:32:51 -- common/autotest_common.sh@960 -- # wait 68267 00:09:50.640 15:32:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:50.640 15:32:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:50.640 15:32:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:50.640 15:32:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.640 15:32:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.640 15:32:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.640 15:32:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.640 15:32:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.640 15:32:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:50.640 ************************************ 00:09:50.640 END TEST nvmf_nmic 00:09:50.640 ************************************ 00:09:50.640 00:09:50.640 real 0m5.832s 00:09:50.640 user 0m18.131s 00:09:50.640 sys 0m2.325s 00:09:50.640 15:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:50.640 15:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.640 15:32:52 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.640 15:32:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:50.640 15:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.640 15:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.899 ************************************ 00:09:50.899 START TEST nvmf_fio_target 00:09:50.899 ************************************ 00:09:50.899 15:32:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:50.899 * Looking for test storage... 
00:09:50.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.899 15:32:52 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.899 15:32:52 -- nvmf/common.sh@7 -- # uname -s 00:09:50.899 15:32:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.899 15:32:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.899 15:32:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.899 15:32:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.899 15:32:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.899 15:32:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.899 15:32:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.899 15:32:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.899 15:32:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.899 15:32:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.899 15:32:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:50.899 15:32:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:09:50.899 15:32:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.899 15:32:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.899 15:32:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.899 15:32:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.899 15:32:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.899 15:32:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.899 15:32:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.899 15:32:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.899 15:32:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.899 15:32:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.899 15:32:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.899 15:32:52 -- paths/export.sh@5 -- # export PATH 00:09:50.899 15:32:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.899 15:32:52 -- nvmf/common.sh@47 -- # : 0 00:09:50.899 15:32:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.899 15:32:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.899 15:32:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.899 15:32:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.899 15:32:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.899 15:32:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.899 15:32:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.899 15:32:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.899 15:32:52 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.899 15:32:52 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.899 15:32:52 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.899 15:32:52 -- target/fio.sh@16 -- # nvmftestinit 00:09:50.899 15:32:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:50.899 15:32:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.899 15:32:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:50.899 15:32:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:50.899 15:32:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:50.899 15:32:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.899 15:32:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.899 15:32:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.899 15:32:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:50.899 15:32:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:50.899 15:32:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:50.899 15:32:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:50.899 15:32:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:50.900 15:32:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:50.900 15:32:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.900 15:32:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.900 15:32:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.900 15:32:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:50.900 15:32:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.900 15:32:52 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.900 15:32:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.900 15:32:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.900 15:32:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.900 15:32:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.900 15:32:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.900 15:32:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.900 15:32:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:50.900 15:32:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:50.900 Cannot find device "nvmf_tgt_br" 00:09:50.900 15:32:52 -- nvmf/common.sh@155 -- # true 00:09:50.900 15:32:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.900 Cannot find device "nvmf_tgt_br2" 00:09:50.900 15:32:52 -- nvmf/common.sh@156 -- # true 00:09:50.900 15:32:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:50.900 15:32:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:50.900 Cannot find device "nvmf_tgt_br" 00:09:50.900 15:32:52 -- nvmf/common.sh@158 -- # true 00:09:50.900 15:32:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:50.900 Cannot find device "nvmf_tgt_br2" 00:09:50.900 15:32:52 -- nvmf/common.sh@159 -- # true 00:09:50.900 15:32:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:50.900 15:32:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:50.900 15:32:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.900 15:32:52 -- nvmf/common.sh@162 -- # true 00:09:50.900 15:32:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.159 15:32:52 -- nvmf/common.sh@163 -- # true 00:09:51.159 15:32:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.159 15:32:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.159 15:32:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.159 15:32:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.159 15:32:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.159 15:32:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.159 15:32:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.159 15:32:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:51.159 15:32:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:51.159 15:32:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:51.159 15:32:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:51.159 15:32:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:51.159 15:32:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:51.159 15:32:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.159 15:32:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:51.159 15:32:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.159 15:32:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:51.159 15:32:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:51.159 15:32:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.159 15:32:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.159 15:32:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.159 15:32:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.159 15:32:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.159 15:32:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:51.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:51.159 00:09:51.159 --- 10.0.0.2 ping statistics --- 00:09:51.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.159 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:51.159 15:32:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:51.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:51.159 00:09:51.159 --- 10.0.0.3 ping statistics --- 00:09:51.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.159 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:51.159 15:32:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:51.159 00:09:51.159 --- 10.0.0.1 ping statistics --- 00:09:51.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.159 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:51.159 15:32:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.159 15:32:52 -- nvmf/common.sh@422 -- # return 0 00:09:51.159 15:32:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:51.159 15:32:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.159 15:32:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:51.159 15:32:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:51.160 15:32:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.160 15:32:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:51.160 15:32:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:51.160 15:32:52 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:51.160 15:32:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:51.160 15:32:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:51.160 15:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:51.160 15:32:52 -- nvmf/common.sh@470 -- # nvmfpid=68545 00:09:51.160 15:32:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.160 15:32:52 -- nvmf/common.sh@471 -- # waitforlisten 68545 00:09:51.160 15:32:52 -- common/autotest_common.sh@817 -- # '[' -z 68545 ']' 00:09:51.160 15:32:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.160 15:32:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:51.160 15:32:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.160 15:32:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:51.160 15:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:51.419 [2024-04-17 15:32:52.611648] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:09:51.419 [2024-04-17 15:32:52.611747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.419 [2024-04-17 15:32:52.756804] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.677 [2024-04-17 15:32:52.894427] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.677 [2024-04-17 15:32:52.894793] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.677 [2024-04-17 15:32:52.894956] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.677 [2024-04-17 15:32:52.895171] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.678 [2024-04-17 15:32:52.895304] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
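The fio target test starting here builds one subsystem with four namespaces of different shapes: two plain malloc bdevs, a raid0 striped across two more, and a concat volume over another three. A condensed sketch of the RPC sequence the script issues next, with sizes, names and flags as they appear in the trace; the loop and the shortened nvme connect line (without the --hostnqn/--hostid arguments the script actually passes) are simplifications:

rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 7); do rpc.py bdev_malloc_create 64 512; done        # auto-named Malloc0..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Connecting exposes the four namespaces as nvme0n1 through nvme0n4, which is what the four fio jobs further down write to.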
00:09:51.678 [2024-04-17 15:32:52.895568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.678 [2024-04-17 15:32:52.895771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.678 [2024-04-17 15:32:52.895770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.678 [2024-04-17 15:32:52.895639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.244 15:32:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:52.244 15:32:53 -- common/autotest_common.sh@850 -- # return 0 00:09:52.244 15:32:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:52.244 15:32:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:52.244 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:52.244 15:32:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.244 15:32:53 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:52.503 [2024-04-17 15:32:53.877880] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.503 15:32:53 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.761 15:32:54 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:52.761 15:32:54 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.025 15:32:54 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:53.025 15:32:54 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.289 15:32:54 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:53.289 15:32:54 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.548 15:32:54 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:53.548 15:32:54 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:53.807 15:32:55 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.064 15:32:55 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:54.064 15:32:55 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.322 15:32:55 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:54.322 15:32:55 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.581 15:32:55 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:54.581 15:32:55 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:54.841 15:32:56 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.100 15:32:56 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.100 15:32:56 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.359 15:32:56 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.359 15:32:56 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:55.617 15:32:56 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.875 [2024-04-17 15:32:57.166413] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.875 15:32:57 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:56.132 15:32:57 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:56.391 15:32:57 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:56.391 15:32:57 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:56.391 15:32:57 -- common/autotest_common.sh@1184 -- # local i=0 00:09:56.391 15:32:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.391 15:32:57 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:09:56.391 15:32:57 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:09:56.391 15:32:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:58.955 15:32:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:58.955 15:32:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:58.955 15:32:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.955 15:32:59 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:09:58.955 15:32:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.955 15:32:59 -- common/autotest_common.sh@1194 -- # return 0 00:09:58.955 15:32:59 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:58.955 [global] 00:09:58.955 thread=1 00:09:58.955 invalidate=1 00:09:58.955 rw=write 00:09:58.955 time_based=1 00:09:58.955 runtime=1 00:09:58.955 ioengine=libaio 00:09:58.955 direct=1 00:09:58.955 bs=4096 00:09:58.955 iodepth=1 00:09:58.955 norandommap=0 00:09:58.955 numjobs=1 00:09:58.955 00:09:58.955 verify_dump=1 00:09:58.955 verify_backlog=512 00:09:58.955 verify_state_save=0 00:09:58.955 do_verify=1 00:09:58.955 verify=crc32c-intel 00:09:58.955 [job0] 00:09:58.955 filename=/dev/nvme0n1 00:09:58.955 [job1] 00:09:58.955 filename=/dev/nvme0n2 00:09:58.955 [job2] 00:09:58.955 filename=/dev/nvme0n3 00:09:58.955 [job3] 00:09:58.955 filename=/dev/nvme0n4 00:09:58.955 Could not set queue depth (nvme0n1) 00:09:58.955 Could not set queue depth (nvme0n2) 00:09:58.955 Could not set queue depth (nvme0n3) 00:09:58.955 Could not set queue depth (nvme0n4) 00:09:58.955 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.955 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.955 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.955 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.955 fio-3.35 00:09:58.955 Starting 4 threads 00:09:59.914 00:09:59.914 job0: (groupid=0, jobs=1): err= 0: pid=68725: Wed Apr 17 15:33:01 2024 00:09:59.914 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:59.914 slat (nsec): min=12077, max=34726, avg=15604.29, stdev=2022.78 00:09:59.914 clat (usec): min=143, max=644, avg=242.56, stdev=51.37 
00:09:59.914 lat (usec): min=159, max=661, avg=258.17, stdev=50.92 00:09:59.914 clat percentiles (usec): 00:09:59.914 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 186], 00:09:59.914 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:09:59.914 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 330], 95.00th=[ 343], 00:09:59.914 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 404], 00:09:59.914 | 99.99th=[ 644] 00:09:59.914 write: IOPS=2475, BW=9902KiB/s (10.1MB/s)(9912KiB/1001msec); 0 zone resets 00:09:59.914 slat (usec): min=11, max=123, avg=20.32, stdev= 5.81 00:09:59.914 clat (usec): min=96, max=291, avg=166.61, stdev=32.40 00:09:59.914 lat (usec): min=119, max=414, avg=186.94, stdev=30.03 00:09:59.914 clat percentiles (usec): 00:09:59.914 | 1.00th=[ 112], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 133], 00:09:59.914 | 30.00th=[ 141], 40.00th=[ 153], 50.00th=[ 174], 60.00th=[ 184], 00:09:59.914 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:09:59.914 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 273], 00:09:59.914 | 99.99th=[ 293] 00:09:59.914 bw ( KiB/s): min=11168, max=11168, per=31.50%, avg=11168.00, stdev= 0.00, samples=1 00:09:59.914 iops : min= 2792, max= 2792, avg=2792.00, stdev= 0.00, samples=1 00:09:59.914 lat (usec) : 100=0.07%, 250=81.33%, 500=18.58%, 750=0.02% 00:09:59.914 cpu : usr=1.50%, sys=7.00%, ctx=4526, majf=0, minf=17 00:09:59.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.914 issued rwts: total=2048,2478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.914 job1: (groupid=0, jobs=1): err= 0: pid=68726: Wed Apr 17 15:33:01 2024 00:09:59.914 read: IOPS=1931, BW=7724KiB/s (7910kB/s)(7732KiB/1001msec) 00:09:59.914 slat (nsec): min=12480, max=40242, avg=14974.37, stdev=2733.01 00:09:59.914 clat (usec): min=170, max=2380, avg=272.53, stdev=63.94 00:09:59.914 lat (usec): min=185, max=2400, avg=287.50, stdev=64.52 00:09:59.914 clat percentiles (usec): 00:09:59.914 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:09:59.914 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:09:59.914 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:09:59.914 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 1270], 99.95th=[ 2376], 00:09:59.914 | 99.99th=[ 2376] 00:09:59.914 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:59.914 slat (usec): min=17, max=122, avg=22.59, stdev= 5.56 00:09:59.914 clat (usec): min=95, max=643, avg=190.87, stdev=25.68 00:09:59.914 lat (usec): min=114, max=668, avg=213.46, stdev=27.36 00:09:59.914 clat percentiles (usec): 00:09:59.914 | 1.00th=[ 109], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 00:09:59.914 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:59.914 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 223], 00:09:59.914 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 453], 99.95th=[ 461], 00:09:59.914 | 99.99th=[ 644] 00:09:59.914 bw ( KiB/s): min= 8192, max= 8192, per=23.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:59.914 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:59.914 lat (usec) : 100=0.23%, 250=58.75%, 500=40.64%, 750=0.28%, 1000=0.05% 00:09:59.914 lat (msec) : 2=0.03%, 
4=0.03% 00:09:59.914 cpu : usr=1.90%, sys=5.50%, ctx=3981, majf=0, minf=11 00:09:59.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.914 issued rwts: total=1933,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.914 job2: (groupid=0, jobs=1): err= 0: pid=68727: Wed Apr 17 15:33:01 2024 00:09:59.914 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:59.914 slat (nsec): min=9100, max=49091, avg=13548.38, stdev=4240.23 00:09:59.914 clat (usec): min=150, max=4084, avg=255.68, stdev=137.60 00:09:59.914 lat (usec): min=165, max=4099, avg=269.23, stdev=137.58 00:09:59.914 clat percentiles (usec): 00:09:59.914 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 200], 00:09:59.914 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:09:59.914 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 330], 00:09:59.914 | 99.00th=[ 359], 99.50th=[ 693], 99.90th=[ 1926], 99.95th=[ 3458], 00:09:59.915 | 99.99th=[ 4080] 00:09:59.915 write: IOPS=2294, BW=9179KiB/s (9399kB/s)(9188KiB/1001msec); 0 zone resets 00:09:59.915 slat (usec): min=12, max=100, avg=22.02, stdev= 4.49 00:09:59.915 clat (usec): min=112, max=304, avg=169.86, stdev=26.31 00:09:59.915 lat (usec): min=137, max=404, avg=191.88, stdev=26.19 00:09:59.915 clat percentiles (usec): 00:09:59.915 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 145], 00:09:59.915 | 30.00th=[ 153], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 180], 00:09:59.915 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 210], 00:09:59.915 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 253], 99.95th=[ 253], 00:09:59.915 | 99.99th=[ 306] 00:09:59.915 bw ( KiB/s): min= 9728, max= 9728, per=27.44%, avg=9728.00, stdev= 0.00, samples=1 00:09:59.915 iops : min= 2432, max= 2432, avg=2432.00, stdev= 0.00, samples=1 00:09:59.915 lat (usec) : 250=75.33%, 500=24.33%, 750=0.12%, 1000=0.12% 00:09:59.915 lat (msec) : 2=0.07%, 4=0.02%, 10=0.02% 00:09:59.915 cpu : usr=2.20%, sys=6.00%, ctx=4347, majf=0, minf=6 00:09:59.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.915 issued rwts: total=2048,2297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.915 job3: (groupid=0, jobs=1): err= 0: pid=68728: Wed Apr 17 15:33:01 2024 00:09:59.915 read: IOPS=1940, BW=7760KiB/s (7946kB/s)(7768KiB/1001msec) 00:09:59.915 slat (nsec): min=13416, max=38746, avg=15537.05, stdev=2187.22 00:09:59.915 clat (usec): min=165, max=753, avg=267.81, stdev=30.08 00:09:59.915 lat (usec): min=180, max=767, avg=283.34, stdev=30.14 00:09:59.915 clat percentiles (usec): 00:09:59.915 | 1.00th=[ 217], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:09:59.915 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:09:59.915 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:09:59.915 | 99.00th=[ 388], 99.50th=[ 482], 99.90th=[ 523], 99.95th=[ 750], 00:09:59.915 | 99.99th=[ 750] 00:09:59.915 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:59.915 slat (nsec): min=17837, 
max=92796, avg=22745.60, stdev=4442.50 00:09:59.915 clat (usec): min=106, max=698, avg=193.49, stdev=28.40 00:09:59.915 lat (usec): min=126, max=723, avg=216.24, stdev=29.99 00:09:59.915 clat percentiles (usec): 00:09:59.915 | 1.00th=[ 128], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:09:59.915 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:09:59.915 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 225], 00:09:59.915 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 371], 00:09:59.915 | 99.99th=[ 701] 00:09:59.915 bw ( KiB/s): min= 8192, max= 8192, per=23.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:59.915 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:59.915 lat (usec) : 250=59.10%, 500=40.73%, 750=0.15%, 1000=0.03% 00:09:59.915 cpu : usr=1.80%, sys=5.70%, ctx=3990, majf=0, minf=9 00:09:59.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.915 issued rwts: total=1942,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.915 00:09:59.915 Run status group 0 (all jobs): 00:09:59.915 READ: bw=31.1MiB/s (32.6MB/s), 7724KiB/s-8184KiB/s (7910kB/s-8380kB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:09:59.915 WRITE: bw=34.6MiB/s (36.3MB/s), 8184KiB/s-9902KiB/s (8380kB/s-10.1MB/s), io=34.7MiB (36.3MB), run=1001-1001msec 00:09:59.915 00:09:59.915 Disk stats (read/write): 00:09:59.915 nvme0n1: ios=1885/2048, merge=0/0, ticks=480/321, in_queue=801, util=88.18% 00:09:59.915 nvme0n2: ios=1584/1898, merge=0/0, ticks=470/384, in_queue=854, util=88.45% 00:09:59.915 nvme0n3: ios=1745/2048, merge=0/0, ticks=404/358, in_queue=762, util=88.51% 00:09:59.915 nvme0n4: ios=1536/1916, merge=0/0, ticks=427/388, in_queue=815, util=89.58% 00:09:59.915 15:33:01 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:59.915 [global] 00:09:59.915 thread=1 00:09:59.915 invalidate=1 00:09:59.915 rw=randwrite 00:09:59.915 time_based=1 00:09:59.915 runtime=1 00:09:59.915 ioengine=libaio 00:09:59.915 direct=1 00:09:59.915 bs=4096 00:09:59.915 iodepth=1 00:09:59.915 norandommap=0 00:09:59.915 numjobs=1 00:09:59.915 00:09:59.915 verify_dump=1 00:09:59.915 verify_backlog=512 00:09:59.915 verify_state_save=0 00:09:59.915 do_verify=1 00:09:59.915 verify=crc32c-intel 00:09:59.915 [job0] 00:09:59.915 filename=/dev/nvme0n1 00:09:59.915 [job1] 00:09:59.915 filename=/dev/nvme0n2 00:09:59.915 [job2] 00:09:59.915 filename=/dev/nvme0n3 00:09:59.915 [job3] 00:09:59.915 filename=/dev/nvme0n4 00:09:59.915 Could not set queue depth (nvme0n1) 00:09:59.915 Could not set queue depth (nvme0n2) 00:09:59.915 Could not set queue depth (nvme0n3) 00:09:59.915 Could not set queue depth (nvme0n4) 00:10:00.173 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.173 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.173 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.173 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.173 fio-3.35 00:10:00.173 Starting 4 threads 00:10:01.109 
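The parameters that fio-wrapper dumped above can be replayed outside the harness; a minimal sketch, assuming the four namespaces exported by cnode1 show up as /dev/nvme0n1 through /dev/nvme0n4 on the initiator and using a hypothetical job-file name:

cat > randwrite-verify.fio <<'EOF'
# mirrors the [global] section printed above: 4 KiB libaio random writes at
# queue depth 1, time-based for 1 s, with crc32c-intel verification of the data
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
numjobs=1
do_verify=1
verify=crc32c-intel
verify_backlog=512

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio randwrite-verify.fio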
00:10:01.109 job0: (groupid=0, jobs=1): err= 0: pid=68787: Wed Apr 17 15:33:02 2024 00:10:01.109 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:10:01.109 slat (usec): min=11, max=410, avg=13.98, stdev= 7.38 00:10:01.109 clat (usec): min=3, max=1941, avg=165.23, stdev=61.62 00:10:01.109 lat (usec): min=148, max=1962, avg=179.21, stdev=62.21 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:10:01.109 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:01.109 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:10:01.109 | 99.00th=[ 212], 99.50th=[ 371], 99.90th=[ 1045], 99.95th=[ 1876], 00:10:01.109 | 99.99th=[ 1942] 00:10:01.109 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:01.109 slat (nsec): min=14551, max=78233, avg=20634.54, stdev=3292.10 00:10:01.109 clat (usec): min=92, max=1705, avg=123.00, stdev=31.58 00:10:01.109 lat (usec): min=111, max=1725, avg=143.64, stdev=31.77 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:10:01.109 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:10:01.109 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 143], 00:10:01.109 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 293], 99.95th=[ 371], 00:10:01.109 | 99.99th=[ 1713] 00:10:01.109 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:01.109 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:01.109 lat (usec) : 4=0.02%, 100=1.14%, 250=98.37%, 500=0.31%, 750=0.02% 00:10:01.109 lat (usec) : 1000=0.05% 00:10:01.109 lat (msec) : 2=0.10% 00:10:01.109 cpu : usr=1.70%, sys=9.00%, ctx=6128, majf=0, minf=7 00:10:01.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.109 issued rwts: total=3056,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.109 job1: (groupid=0, jobs=1): err= 0: pid=68788: Wed Apr 17 15:33:02 2024 00:10:01.109 read: IOPS=1714, BW=6857KiB/s (7022kB/s)(6864KiB/1001msec) 00:10:01.109 slat (nsec): min=12672, max=57293, avg=15811.68, stdev=3087.21 00:10:01.109 clat (usec): min=142, max=1864, avg=291.11, stdev=63.22 00:10:01.109 lat (usec): min=156, max=1880, avg=306.92, stdev=64.41 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 155], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 269], 00:10:01.109 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:01.109 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 343], 00:10:01.109 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 1860], 00:10:01.109 | 99.99th=[ 1860] 00:10:01.109 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.109 slat (usec): min=16, max=131, avg=21.71, stdev= 4.56 00:10:01.109 clat (usec): min=96, max=823, avg=205.96, stdev=30.71 00:10:01.109 lat (usec): min=119, max=845, avg=227.67, stdev=31.78 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 115], 5.00th=[ 135], 10.00th=[ 188], 20.00th=[ 198], 00:10:01.109 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:10:01.109 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 241], 00:10:01.109 | 
99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 441], 99.95th=[ 445], 00:10:01.109 | 99.99th=[ 824] 00:10:01.109 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.109 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.109 lat (usec) : 100=0.03%, 250=55.31%, 500=43.52%, 750=1.09%, 1000=0.03% 00:10:01.109 lat (msec) : 2=0.03% 00:10:01.109 cpu : usr=1.90%, sys=5.10%, ctx=3764, majf=0, minf=11 00:10:01.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 issued rwts: total=1716,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.110 job2: (groupid=0, jobs=1): err= 0: pid=68793: Wed Apr 17 15:33:02 2024 00:10:01.110 read: IOPS=1656, BW=6625KiB/s (6784kB/s)(6632KiB/1001msec) 00:10:01.110 slat (usec): min=11, max=119, avg=15.85, stdev= 4.85 00:10:01.110 clat (usec): min=155, max=7232, avg=294.63, stdev=181.80 00:10:01.110 lat (usec): min=169, max=7245, avg=310.48, stdev=181.83 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 225], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:10:01.110 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:01.110 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 334], 00:10:01.110 | 99.00th=[ 412], 99.50th=[ 482], 99.90th=[ 2540], 99.95th=[ 7242], 00:10:01.110 | 99.99th=[ 7242] 00:10:01.110 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:01.110 slat (nsec): min=17931, max=81820, avg=23343.22, stdev=5277.92 00:10:01.110 clat (usec): min=118, max=488, avg=210.17, stdev=34.77 00:10:01.110 lat (usec): min=140, max=514, avg=233.52, stdev=37.08 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 135], 5.00th=[ 155], 10.00th=[ 186], 20.00th=[ 196], 00:10:01.110 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:01.110 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 247], 00:10:01.110 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 429], 99.95th=[ 449], 00:10:01.110 | 99.99th=[ 490] 00:10:01.110 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:01.110 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:01.110 lat (usec) : 250=53.56%, 500=46.28%, 750=0.11% 00:10:01.110 lat (msec) : 4=0.03%, 10=0.03% 00:10:01.110 cpu : usr=1.50%, sys=5.70%, ctx=3722, majf=0, minf=18 00:10:01.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 issued rwts: total=1658,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.110 job3: (groupid=0, jobs=1): err= 0: pid=68794: Wed Apr 17 15:33:02 2024 00:10:01.110 read: IOPS=2742, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:10:01.110 slat (nsec): min=10637, max=36286, avg=13130.43, stdev=1941.89 00:10:01.110 clat (usec): min=145, max=1996, avg=174.56, stdev=38.92 00:10:01.110 lat (usec): min=158, max=2015, avg=187.69, stdev=39.22 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:01.110 | 30.00th=[ 167], 
40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:10:01.110 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:10:01.110 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 424], 99.95th=[ 685], 00:10:01.110 | 99.99th=[ 1991] 00:10:01.110 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:01.110 slat (nsec): min=13383, max=87943, avg=20571.73, stdev=4288.99 00:10:01.110 clat (usec): min=100, max=732, avg=133.97, stdev=17.06 00:10:01.110 lat (usec): min=118, max=754, avg=154.54, stdev=18.08 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 124], 00:10:01.110 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:10:01.110 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 157], 00:10:01.110 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 217], 99.95th=[ 273], 00:10:01.110 | 99.99th=[ 734] 00:10:01.110 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:01.110 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:01.110 lat (usec) : 250=99.90%, 500=0.05%, 750=0.03% 00:10:01.110 lat (msec) : 2=0.02% 00:10:01.110 cpu : usr=2.40%, sys=7.60%, ctx=5817, majf=0, minf=9 00:10:01.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 issued rwts: total=2745,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.110 00:10:01.110 Run status group 0 (all jobs): 00:10:01.110 READ: bw=35.8MiB/s (37.5MB/s), 6625KiB/s-11.9MiB/s (6784kB/s-12.5MB/s), io=35.8MiB (37.6MB), run=1001-1001msec 00:10:01.110 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:01.110 00:10:01.110 Disk stats (read/write): 00:10:01.110 nvme0n1: ios=2610/2769, merge=0/0, ticks=447/365, in_queue=812, util=88.28% 00:10:01.110 nvme0n2: ios=1585/1700, merge=0/0, ticks=481/366, in_queue=847, util=88.98% 00:10:01.110 nvme0n3: ios=1563/1660, merge=0/0, ticks=507/367, in_queue=874, util=90.22% 00:10:01.110 nvme0n4: ios=2468/2560, merge=0/0, ticks=447/356, in_queue=803, util=89.96% 00:10:01.110 15:33:02 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:01.369 [global] 00:10:01.369 thread=1 00:10:01.369 invalidate=1 00:10:01.369 rw=write 00:10:01.369 time_based=1 00:10:01.369 runtime=1 00:10:01.369 ioengine=libaio 00:10:01.369 direct=1 00:10:01.369 bs=4096 00:10:01.369 iodepth=128 00:10:01.369 norandommap=0 00:10:01.369 numjobs=1 00:10:01.369 00:10:01.369 verify_dump=1 00:10:01.369 verify_backlog=512 00:10:01.369 verify_state_save=0 00:10:01.369 do_verify=1 00:10:01.369 verify=crc32c-intel 00:10:01.369 [job0] 00:10:01.369 filename=/dev/nvme0n1 00:10:01.369 [job1] 00:10:01.369 filename=/dev/nvme0n2 00:10:01.369 [job2] 00:10:01.369 filename=/dev/nvme0n3 00:10:01.369 [job3] 00:10:01.369 filename=/dev/nvme0n4 00:10:01.369 Could not set queue depth (nvme0n1) 00:10:01.369 Could not set queue depth (nvme0n2) 00:10:01.369 Could not set queue depth (nvme0n3) 00:10:01.369 Could not set queue depth (nvme0n4) 00:10:01.369 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.369 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.369 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.369 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.369 fio-3.35 00:10:01.369 Starting 4 threads 00:10:02.746 00:10:02.746 job0: (groupid=0, jobs=1): err= 0: pid=68849: Wed Apr 17 15:33:03 2024 00:10:02.746 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:02.746 slat (usec): min=5, max=3059, avg=90.93, stdev=390.30 00:10:02.746 clat (usec): min=9368, max=14409, avg=12202.34, stdev=523.26 00:10:02.746 lat (usec): min=9937, max=14617, avg=12293.27, stdev=372.48 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[11731], 20.00th=[11994], 00:10:02.746 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:10:02.746 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[12780], 00:10:02.746 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13042], 99.95th=[13173], 00:10:02.746 | 99.99th=[14353] 00:10:02.746 write: IOPS=5563, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:10:02.746 slat (usec): min=11, max=2675, avg=88.83, stdev=368.09 00:10:02.746 clat (usec): min=256, max=13482, avg=11482.21, stdev=968.58 00:10:02.746 lat (usec): min=2331, max=14353, avg=11571.04, stdev=918.65 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[ 5735], 5.00th=[10814], 10.00th=[11207], 20.00th=[11338], 00:10:02.746 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:10:02.746 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:10:02.746 | 99.00th=[12387], 99.50th=[12387], 99.90th=[13173], 99.95th=[13304], 00:10:02.746 | 99.99th=[13435] 00:10:02.746 bw ( KiB/s): min=21306, max=21306, per=34.18%, avg=21306.00, stdev= 0.00, samples=1 00:10:02.746 iops : min= 5326, max= 5326, avg=5326.00, stdev= 0.00, samples=1 00:10:02.746 lat (usec) : 500=0.01% 00:10:02.746 lat (msec) : 4=0.30%, 10=2.65%, 20=97.04% 00:10:02.746 cpu : usr=5.10%, sys=14.30%, ctx=476, majf=0, minf=5 00:10:02.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:02.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.746 issued rwts: total=5120,5569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.746 job1: (groupid=0, jobs=1): err= 0: pid=68850: Wed Apr 17 15:33:03 2024 00:10:02.746 read: IOPS=2492, BW=9968KiB/s (10.2MB/s)(9988KiB/1002msec) 00:10:02.746 slat (usec): min=8, max=5426, avg=197.71, stdev=698.37 00:10:02.746 clat (usec): min=498, max=31607, avg=24400.05, stdev=3470.03 00:10:02.746 lat (usec): min=1110, max=31620, avg=24597.76, stdev=3417.08 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[ 4948], 5.00th=[20579], 10.00th=[21627], 20.00th=[23725], 00:10:02.746 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:10:02.746 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26870], 95.00th=[27395], 00:10:02.746 | 99.00th=[28967], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589], 00:10:02.746 | 99.99th=[31589] 00:10:02.746 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:02.746 slat (usec): min=13, max=6151, avg=189.96, stdev=756.22 00:10:02.746 clat (usec): min=17208, max=31323, 
avg=25326.18, stdev=1972.61 00:10:02.746 lat (usec): min=19555, max=31341, avg=25516.14, stdev=1884.33 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[19792], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:10:02.746 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25560], 60.00th=[25822], 00:10:02.746 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[28967], 00:10:02.746 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:10:02.746 | 99.99th=[31327] 00:10:02.746 bw ( KiB/s): min= 9667, max=10832, per=16.44%, avg=10249.50, stdev=823.78, samples=2 00:10:02.746 iops : min= 2416, max= 2708, avg=2562.00, stdev=206.48, samples=2 00:10:02.746 lat (usec) : 500=0.02% 00:10:02.746 lat (msec) : 2=0.14%, 10=0.63%, 20=1.84%, 50=97.37% 00:10:02.746 cpu : usr=2.50%, sys=7.59%, ctx=712, majf=0, minf=19 00:10:02.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:02.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.746 issued rwts: total=2497,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.746 job2: (groupid=0, jobs=1): err= 0: pid=68851: Wed Apr 17 15:33:03 2024 00:10:02.746 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:02.746 slat (usec): min=7, max=4841, avg=101.65, stdev=484.07 00:10:02.746 clat (usec): min=9770, max=16130, avg=13615.01, stdev=723.49 00:10:02.746 lat (usec): min=12321, max=16141, avg=13716.66, stdev=544.89 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[10814], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:10:02.746 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:10:02.746 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14222], 95.00th=[14484], 00:10:02.746 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:10:02.746 | 99.99th=[16188] 00:10:02.746 write: IOPS=4933, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec); 0 zone resets 00:10:02.746 slat (usec): min=12, max=3588, avg=99.74, stdev=425.72 00:10:02.746 clat (usec): min=1165, max=14538, avg=12904.14, stdev=1201.87 00:10:02.746 lat (usec): min=1186, max=14561, avg=13003.88, stdev=1126.44 00:10:02.746 clat percentiles (usec): 00:10:02.746 | 1.00th=[ 7373], 5.00th=[11076], 10.00th=[12518], 20.00th=[12780], 00:10:02.746 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:10:02.746 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:10:02.746 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14484], 99.95th=[14484], 00:10:02.746 | 99.99th=[14484] 00:10:02.746 bw ( KiB/s): min=20480, max=20480, per=32.85%, avg=20480.00, stdev= 0.00, samples=1 00:10:02.746 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:02.746 lat (msec) : 2=0.16%, 4=0.02%, 10=0.71%, 20=99.11% 00:10:02.746 cpu : usr=3.80%, sys=14.19%, ctx=300, majf=0, minf=15 00:10:02.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:02.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.746 issued rwts: total=4608,4943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.746 job3: (groupid=0, jobs=1): err= 0: pid=68852: Wed Apr 17 15:33:03 2024 00:10:02.746 read: IOPS=2521, 
BW=9.85MiB/s (10.3MB/s)(9.88MiB/1003msec) 00:10:02.746 slat (usec): min=8, max=7418, avg=198.77, stdev=803.21 00:10:02.747 clat (usec): min=999, max=31951, avg=24652.23, stdev=3337.80 00:10:02.747 lat (usec): min=4802, max=31967, avg=24851.00, stdev=3265.68 00:10:02.747 clat percentiles (usec): 00:10:02.747 | 1.00th=[ 5211], 5.00th=[19792], 10.00th=[21890], 20.00th=[24249], 00:10:02.747 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:10:02.747 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27919], 00:10:02.747 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:10:02.747 | 99.99th=[31851] 00:10:02.747 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:02.747 slat (usec): min=14, max=6497, avg=187.16, stdev=748.05 00:10:02.747 clat (usec): min=16109, max=32360, avg=24950.98, stdev=2396.45 00:10:02.747 lat (usec): min=18775, max=32376, avg=25138.14, stdev=2309.92 00:10:02.747 clat percentiles (usec): 00:10:02.747 | 1.00th=[18744], 5.00th=[20579], 10.00th=[22152], 20.00th=[23200], 00:10:02.747 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25560], 00:10:02.747 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[30278], 00:10:02.747 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:10:02.747 | 99.99th=[32375] 00:10:02.747 bw ( KiB/s): min= 9413, max=11048, per=16.41%, avg=10230.50, stdev=1156.12, samples=2 00:10:02.747 iops : min= 2353, max= 2762, avg=2557.50, stdev=289.21, samples=2 00:10:02.747 lat (usec) : 1000=0.02% 00:10:02.747 lat (msec) : 10=0.63%, 20=3.24%, 50=96.11% 00:10:02.747 cpu : usr=2.10%, sys=7.98%, ctx=655, majf=0, minf=11 00:10:02.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:02.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.747 issued rwts: total=2529,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.747 00:10:02.747 Run status group 0 (all jobs): 00:10:02.747 READ: bw=57.5MiB/s (60.3MB/s), 9968KiB/s-20.0MiB/s (10.2MB/s-20.9MB/s), io=57.6MiB (60.4MB), run=1001-1003msec 00:10:02.747 WRITE: bw=60.9MiB/s (63.8MB/s), 9.97MiB/s-21.7MiB/s (10.5MB/s-22.8MB/s), io=61.1MiB (64.0MB), run=1001-1003msec 00:10:02.747 00:10:02.747 Disk stats (read/write): 00:10:02.747 nvme0n1: ios=4658/4610, merge=0/0, ticks=12638/11381, in_queue=24019, util=88.06% 00:10:02.747 nvme0n2: ios=2097/2266, merge=0/0, ticks=12316/12892, in_queue=25208, util=88.79% 00:10:02.747 nvme0n3: ios=4113/4128, merge=0/0, ticks=12677/11815, in_queue=24492, util=89.57% 00:10:02.747 nvme0n4: ios=2048/2310, merge=0/0, ticks=12582/12871, in_queue=25453, util=89.73% 00:10:02.747 15:33:03 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:02.747 [global] 00:10:02.747 thread=1 00:10:02.747 invalidate=1 00:10:02.747 rw=randwrite 00:10:02.747 time_based=1 00:10:02.747 runtime=1 00:10:02.747 ioengine=libaio 00:10:02.747 direct=1 00:10:02.747 bs=4096 00:10:02.747 iodepth=128 00:10:02.747 norandommap=0 00:10:02.747 numjobs=1 00:10:02.747 00:10:02.747 verify_dump=1 00:10:02.747 verify_backlog=512 00:10:02.747 verify_state_save=0 00:10:02.747 do_verify=1 00:10:02.747 verify=crc32c-intel 00:10:02.747 [job0] 00:10:02.747 filename=/dev/nvme0n1 00:10:02.747 [job1] 00:10:02.747 filename=/dev/nvme0n2 
00:10:02.747 [job2] 00:10:02.747 filename=/dev/nvme0n3 00:10:02.747 [job3] 00:10:02.747 filename=/dev/nvme0n4 00:10:02.747 Could not set queue depth (nvme0n1) 00:10:02.747 Could not set queue depth (nvme0n2) 00:10:02.747 Could not set queue depth (nvme0n3) 00:10:02.747 Could not set queue depth (nvme0n4) 00:10:02.747 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.747 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.747 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.747 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.747 fio-3.35 00:10:02.747 Starting 4 threads 00:10:04.130 00:10:04.130 job0: (groupid=0, jobs=1): err= 0: pid=68905: Wed Apr 17 15:33:05 2024 00:10:04.130 read: IOPS=2165, BW=8661KiB/s (8869kB/s)(8696KiB/1004msec) 00:10:04.130 slat (usec): min=8, max=15595, avg=209.29, stdev=1037.16 00:10:04.130 clat (usec): min=505, max=57715, avg=24618.41, stdev=6415.37 00:10:04.130 lat (usec): min=11699, max=57732, avg=24827.71, stdev=6481.07 00:10:04.130 clat percentiles (usec): 00:10:04.130 | 1.00th=[12125], 5.00th=[16909], 10.00th=[18744], 20.00th=[21365], 00:10:04.130 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22414], 60.00th=[23462], 00:10:04.130 | 70.00th=[25297], 80.00th=[31065], 90.00th=[31851], 95.00th=[33424], 00:10:04.130 | 99.00th=[49021], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:10:04.130 | 99.99th=[57934] 00:10:04.130 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:10:04.130 slat (usec): min=13, max=7237, avg=206.36, stdev=858.85 00:10:04.130 clat (usec): min=10354, max=71672, avg=28630.05, stdev=16773.25 00:10:04.130 lat (usec): min=10373, max=71692, avg=28836.41, stdev=16879.31 00:10:04.130 clat percentiles (usec): 00:10:04.130 | 1.00th=[11994], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:10:04.130 | 30.00th=[14615], 40.00th=[17171], 50.00th=[17957], 60.00th=[25035], 00:10:04.130 | 70.00th=[40633], 80.00th=[46400], 90.00th=[54789], 95.00th=[59507], 00:10:04.130 | 99.00th=[64226], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:10:04.130 | 99.99th=[71828] 00:10:04.130 bw ( KiB/s): min= 8176, max=12288, per=15.80%, avg=10232.00, stdev=2907.62, samples=2 00:10:04.130 iops : min= 2044, max= 3072, avg=2558.00, stdev=726.91, samples=2 00:10:04.130 lat (usec) : 750=0.02% 00:10:04.130 lat (msec) : 20=35.74%, 50=55.32%, 100=8.91% 00:10:04.130 cpu : usr=2.49%, sys=7.08%, ctx=235, majf=0, minf=11 00:10:04.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:04.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.131 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.131 job1: (groupid=0, jobs=1): err= 0: pid=68906: Wed Apr 17 15:33:05 2024 00:10:04.131 read: IOPS=5372, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:10:04.131 slat (usec): min=10, max=5859, avg=85.94, stdev=501.62 00:10:04.131 clat (usec): min=1108, max=19305, avg=12084.29, stdev=1420.36 00:10:04.131 lat (usec): min=1901, max=22980, avg=12170.24, stdev=1439.64 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[ 7439], 5.00th=[ 9634], 
10.00th=[11207], 20.00th=[11600], 00:10:04.131 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:10:04.131 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:10:04.131 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:10:04.131 | 99.99th=[19268] 00:10:04.131 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:04.131 slat (usec): min=6, max=8662, avg=87.08, stdev=471.60 00:10:04.131 clat (usec): min=6015, max=16315, avg=11001.78, stdev=998.57 00:10:04.131 lat (usec): min=8083, max=16342, avg=11088.87, stdev=904.44 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:10:04.131 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:10:04.131 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:10:04.131 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16188], 99.95th=[16319], 00:10:04.131 | 99.99th=[16319] 00:10:04.131 bw ( KiB/s): min=22896, max=22896, per=35.35%, avg=22896.00, stdev= 0.00, samples=1 00:10:04.131 iops : min= 5724, max= 5724, avg=5724.00, stdev= 0.00, samples=1 00:10:04.131 lat (msec) : 2=0.05%, 4=0.05%, 10=6.56%, 20=93.34% 00:10:04.131 cpu : usr=5.49%, sys=17.48%, ctx=237, majf=0, minf=9 00:10:04.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:04.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.131 issued rwts: total=5383,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.131 job2: (groupid=0, jobs=1): err= 0: pid=68907: Wed Apr 17 15:33:05 2024 00:10:04.131 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:04.131 slat (usec): min=7, max=14125, avg=209.82, stdev=1185.19 00:10:04.131 clat (usec): min=13778, max=55672, avg=26408.15, stdev=9515.29 00:10:04.131 lat (usec): min=16319, max=55688, avg=26617.97, stdev=9525.17 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[15664], 5.00th=[17695], 10.00th=[18482], 20.00th=[19530], 00:10:04.131 | 30.00th=[19792], 40.00th=[20317], 50.00th=[22152], 60.00th=[26346], 00:10:04.131 | 70.00th=[30016], 80.00th=[31065], 90.00th=[39584], 95.00th=[48497], 00:10:04.131 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:10:04.131 | 99.99th=[55837] 00:10:04.131 write: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec); 0 zone resets 00:10:04.131 slat (usec): min=13, max=8173, avg=149.52, stdev=739.86 00:10:04.131 clat (usec): min=1018, max=43786, avg=19721.73, stdev=6245.52 00:10:04.131 lat (usec): min=3306, max=43829, avg=19871.25, stdev=6226.88 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[ 4015], 5.00th=[15008], 10.00th=[15664], 20.00th=[15795], 00:10:04.131 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[19792], 00:10:04.131 | 70.00th=[20579], 80.00th=[22414], 90.00th=[27395], 95.00th=[35390], 00:10:04.131 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:04.131 | 99.99th=[43779] 00:10:04.131 bw ( KiB/s): min= 8192, max=14344, per=17.40%, avg=11268.00, stdev=4350.12, samples=2 00:10:04.131 iops : min= 2048, max= 3586, avg=2817.00, stdev=1087.53, samples=2 00:10:04.131 lat (msec) : 2=0.02%, 4=0.51%, 10=0.07%, 20=48.63%, 50=48.52% 00:10:04.131 lat (msec) : 100=2.25% 00:10:04.131 cpu : usr=3.09%, sys=8.37%, ctx=174, 
majf=0, minf=15 00:10:04.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:04.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.131 issued rwts: total=2560,2945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.131 job3: (groupid=0, jobs=1): err= 0: pid=68908: Wed Apr 17 15:33:05 2024 00:10:04.131 read: IOPS=4788, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1003msec) 00:10:04.131 slat (usec): min=8, max=6548, avg=96.14, stdev=608.74 00:10:04.131 clat (usec): min=1497, max=20748, avg=13353.09, stdev=1613.31 00:10:04.131 lat (usec): min=6053, max=24807, avg=13449.23, stdev=1635.29 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[ 6980], 5.00th=[ 9765], 10.00th=[12518], 20.00th=[13042], 00:10:04.131 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:10:04.131 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14222], 95.00th=[14484], 00:10:04.131 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:10:04.131 | 99.99th=[20841] 00:10:04.131 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:04.131 slat (usec): min=11, max=8306, avg=97.75, stdev=572.18 00:10:04.131 clat (usec): min=6385, max=16522, avg=12294.47, stdev=1156.39 00:10:04.131 lat (usec): min=8728, max=16548, avg=12392.21, stdev=1033.09 00:10:04.131 clat percentiles (usec): 00:10:04.131 | 1.00th=[ 8160], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:10:04.131 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:10:04.131 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:10:04.131 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16450], 99.95th=[16450], 00:10:04.131 | 99.99th=[16581] 00:10:04.131 bw ( KiB/s): min=20480, max=20521, per=31.65%, avg=20500.50, stdev=28.99, samples=2 00:10:04.131 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:04.131 lat (msec) : 2=0.01%, 10=3.87%, 20=95.56%, 50=0.56% 00:10:04.131 cpu : usr=4.49%, sys=13.77%, ctx=201, majf=0, minf=12 00:10:04.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:04.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.131 issued rwts: total=4803,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.131 00:10:04.131 Run status group 0 (all jobs): 00:10:04.131 READ: bw=58.0MiB/s (60.9MB/s), 8661KiB/s-21.0MiB/s (8869kB/s-22.0MB/s), io=58.3MiB (61.1MB), run=1002-1004msec 00:10:04.131 WRITE: bw=63.2MiB/s (66.3MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=63.5MiB (66.6MB), run=1002-1004msec 00:10:04.131 00:10:04.131 Disk stats (read/write): 00:10:04.131 nvme0n1: ios=2098/2247, merge=0/0, ticks=24662/26368, in_queue=51030, util=88.78% 00:10:04.131 nvme0n2: ios=4657/4800, merge=0/0, ticks=52265/47765, in_queue=100030, util=89.28% 00:10:04.131 nvme0n3: ios=2065/2560, merge=0/0, ticks=14373/11130, in_queue=25503, util=89.70% 00:10:04.131 nvme0n4: ios=4096/4352, merge=0/0, ticks=51959/49335, in_queue=101294, util=89.65% 00:10:04.131 15:33:05 -- target/fio.sh@55 -- # sync 00:10:04.131 15:33:05 -- target/fio.sh@59 -- # fio_pid=68921 00:10:04.131 15:33:05 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:04.131 15:33:05 -- target/fio.sh@61 -- # sleep 3 00:10:04.131 [global] 00:10:04.131 thread=1 00:10:04.131 invalidate=1 00:10:04.131 rw=read 00:10:04.131 time_based=1 00:10:04.131 runtime=10 00:10:04.131 ioengine=libaio 00:10:04.131 direct=1 00:10:04.131 bs=4096 00:10:04.131 iodepth=1 00:10:04.131 norandommap=1 00:10:04.131 numjobs=1 00:10:04.131 00:10:04.131 [job0] 00:10:04.131 filename=/dev/nvme0n1 00:10:04.131 [job1] 00:10:04.131 filename=/dev/nvme0n2 00:10:04.131 [job2] 00:10:04.131 filename=/dev/nvme0n3 00:10:04.131 [job3] 00:10:04.131 filename=/dev/nvme0n4 00:10:04.131 Could not set queue depth (nvme0n1) 00:10:04.131 Could not set queue depth (nvme0n2) 00:10:04.131 Could not set queue depth (nvme0n3) 00:10:04.131 Could not set queue depth (nvme0n4) 00:10:04.131 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.131 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.131 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.131 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.131 fio-3.35 00:10:04.131 Starting 4 threads 00:10:07.477 15:33:08 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:07.477 fio: pid=68970, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:07.477 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40787968, buflen=4096 00:10:07.477 15:33:08 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:07.477 fio: pid=68969, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:07.477 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=57602048, buflen=4096 00:10:07.477 15:33:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.477 15:33:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:07.737 fio: pid=68967, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:07.737 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=69632, buflen=4096 00:10:07.737 15:33:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.737 15:33:09 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:07.997 fio: pid=68968, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:07.997 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=53682176, buflen=4096 00:10:07.997 00:10:07.997 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68967: Wed Apr 17 15:33:09 2024 00:10:07.997 read: IOPS=4851, BW=18.9MiB/s (19.9MB/s)(64.1MiB/3381msec) 00:10:07.997 slat (usec): min=10, max=15844, avg=17.89, stdev=200.56 00:10:07.997 clat (usec): min=119, max=3662, avg=186.49, stdev=66.61 00:10:07.997 lat (usec): min=138, max=16141, avg=204.38, stdev=212.34 00:10:07.997 clat percentiles (usec): 00:10:07.997 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:07.997 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 182], 00:10:07.997 | 70.00th=[ 194], 80.00th=[ 210], 
90.00th=[ 239], 95.00th=[ 265], 00:10:07.997 | 99.00th=[ 318], 99.50th=[ 375], 99.90th=[ 570], 99.95th=[ 1270], 00:10:07.997 | 99.99th=[ 3458] 00:10:07.997 bw ( KiB/s): min=15328, max=21864, per=33.94%, avg=19905.00, stdev=2657.25, samples=6 00:10:07.997 iops : min= 3832, max= 5466, avg=4976.17, stdev=664.28, samples=6 00:10:07.997 lat (usec) : 250=92.54%, 500=7.28%, 750=0.09%, 1000=0.01% 00:10:07.997 lat (msec) : 2=0.04%, 4=0.03% 00:10:07.997 cpu : usr=1.92%, sys=6.36%, ctx=16409, majf=0, minf=1 00:10:07.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.997 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.997 issued rwts: total=16402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.997 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68968: Wed Apr 17 15:33:09 2024 00:10:07.997 read: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(51.2MiB/3651msec) 00:10:07.997 slat (usec): min=7, max=14754, avg=19.08, stdev=221.63 00:10:07.997 clat (usec): min=43, max=7360, avg=257.90, stdev=102.09 00:10:07.997 lat (usec): min=146, max=14982, avg=276.99, stdev=244.55 00:10:07.997 clat percentiles (usec): 00:10:07.997 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 217], 00:10:07.997 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:07.997 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:10:07.997 | 99.00th=[ 371], 99.50th=[ 400], 99.90th=[ 685], 99.95th=[ 1713], 00:10:07.997 | 99.99th=[ 4047] 00:10:07.997 bw ( KiB/s): min=12520, max=17865, per=24.24%, avg=14218.00, stdev=1699.98, samples=7 00:10:07.997 iops : min= 3130, max= 4466, avg=3554.43, stdev=424.93, samples=7 00:10:07.997 lat (usec) : 50=0.01%, 250=34.69%, 500=65.12%, 750=0.08%, 1000=0.02% 00:10:07.997 lat (msec) : 2=0.02%, 4=0.03%, 10=0.02% 00:10:07.997 cpu : usr=1.23%, sys=4.77%, ctx=13118, majf=0, minf=1 00:10:07.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.997 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.997 issued rwts: total=13107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.998 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68969: Wed Apr 17 15:33:09 2024 00:10:07.998 read: IOPS=4486, BW=17.5MiB/s (18.4MB/s)(54.9MiB/3135msec) 00:10:07.998 slat (usec): min=10, max=12791, avg=16.76, stdev=135.22 00:10:07.998 clat (usec): min=136, max=2669, avg=204.52, stdev=59.36 00:10:07.998 lat (usec): min=152, max=13111, avg=221.28, stdev=149.11 00:10:07.998 clat percentiles (usec): 00:10:07.998 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:10:07.998 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:10:07.998 | 70.00th=[ 208], 80.00th=[ 231], 90.00th=[ 273], 95.00th=[ 297], 00:10:07.998 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 445], 99.95th=[ 840], 00:10:07.998 | 99.99th=[ 2147] 00:10:07.998 bw ( KiB/s): min=12592, max=19984, per=30.99%, avg=18173.33, stdev=2842.59, samples=6 00:10:07.998 iops : min= 3148, max= 4996, avg=4543.33, stdev=710.65, samples=6 00:10:07.998 lat (usec) : 250=84.24%, 500=15.68%, 750=0.01%, 
1000=0.01% 00:10:07.998 lat (msec) : 2=0.02%, 4=0.03% 00:10:07.998 cpu : usr=1.24%, sys=6.19%, ctx=14074, majf=0, minf=1 00:10:07.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.998 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.998 issued rwts: total=14064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.998 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68970: Wed Apr 17 15:33:09 2024 00:10:07.998 read: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(38.9MiB/2915msec) 00:10:07.998 slat (usec): min=7, max=146, avg=12.55, stdev= 5.38 00:10:07.998 clat (usec): min=159, max=1654, avg=278.71, stdev=35.54 00:10:07.998 lat (usec): min=179, max=1664, avg=291.26, stdev=35.38 00:10:07.998 clat percentiles (usec): 00:10:07.998 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 255], 00:10:07.998 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:07.998 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 338], 00:10:07.998 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 519], 99.95th=[ 668], 00:10:07.998 | 99.99th=[ 1647] 00:10:07.998 bw ( KiB/s): min=13560, max=14112, per=23.76%, avg=13936.00, stdev=217.70, samples=5 00:10:07.998 iops : min= 3390, max= 3528, avg=3484.00, stdev=54.42, samples=5 00:10:07.998 lat (usec) : 250=15.28%, 500=84.61%, 750=0.09% 00:10:07.998 lat (msec) : 2=0.01% 00:10:07.998 cpu : usr=0.58%, sys=4.15%, ctx=9959, majf=0, minf=1 00:10:07.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.998 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.998 issued rwts: total=9959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.998 00:10:07.998 Run status group 0 (all jobs): 00:10:07.998 READ: bw=57.3MiB/s (60.1MB/s), 13.3MiB/s-18.9MiB/s (14.0MB/s-19.9MB/s), io=209MiB (219MB), run=2915-3651msec 00:10:07.998 00:10:07.998 Disk stats (read/write): 00:10:07.998 nvme0n1: ios=16349/0, merge=0/0, ticks=3082/0, in_queue=3082, util=95.08% 00:10:07.998 nvme0n2: ios=12940/0, merge=0/0, ticks=3306/0, in_queue=3306, util=94.97% 00:10:07.998 nvme0n3: ios=14016/0, merge=0/0, ticks=2929/0, in_queue=2929, util=96.18% 00:10:07.998 nvme0n4: ios=9828/0, merge=0/0, ticks=2624/0, in_queue=2624, util=96.76% 00:10:07.998 15:33:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.998 15:33:09 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:08.258 15:33:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.258 15:33:09 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:08.518 15:33:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.518 15:33:09 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:08.777 15:33:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.777 15:33:10 -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:09.037 15:33:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.037 15:33:10 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:09.297 15:33:10 -- target/fio.sh@69 -- # fio_status=0 00:10:09.297 15:33:10 -- target/fio.sh@70 -- # wait 68921 00:10:09.297 15:33:10 -- target/fio.sh@70 -- # fio_status=4 00:10:09.297 15:33:10 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.297 15:33:10 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.297 15:33:10 -- common/autotest_common.sh@1205 -- # local i=0 00:10:09.297 15:33:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:09.297 15:33:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.297 15:33:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:09.297 15:33:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.297 nvmf hotplug test: fio failed as expected 00:10:09.297 15:33:10 -- common/autotest_common.sh@1217 -- # return 0 00:10:09.297 15:33:10 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:09.297 15:33:10 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:09.297 15:33:10 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.557 15:33:10 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.557 15:33:10 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.557 15:33:10 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.557 15:33:10 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.557 15:33:10 -- target/fio.sh@91 -- # nvmftestfini 00:10:09.557 15:33:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:09.557 15:33:10 -- nvmf/common.sh@117 -- # sync 00:10:09.557 15:33:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.557 15:33:10 -- nvmf/common.sh@120 -- # set +e 00:10:09.557 15:33:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.557 15:33:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.557 rmmod nvme_tcp 00:10:09.557 rmmod nvme_fabrics 00:10:09.817 rmmod nvme_keyring 00:10:09.817 15:33:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.817 15:33:11 -- nvmf/common.sh@124 -- # set -e 00:10:09.817 15:33:11 -- nvmf/common.sh@125 -- # return 0 00:10:09.817 15:33:11 -- nvmf/common.sh@478 -- # '[' -n 68545 ']' 00:10:09.817 15:33:11 -- nvmf/common.sh@479 -- # killprocess 68545 00:10:09.817 15:33:11 -- common/autotest_common.sh@936 -- # '[' -z 68545 ']' 00:10:09.817 15:33:11 -- common/autotest_common.sh@940 -- # kill -0 68545 00:10:09.817 15:33:11 -- common/autotest_common.sh@941 -- # uname 00:10:09.817 15:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.817 15:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68545 00:10:09.817 killing process with pid 68545 00:10:09.817 15:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:09.817 15:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:09.817 15:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68545' 00:10:09.817 15:33:11 -- common/autotest_common.sh@955 -- # kill 68545 00:10:09.817 
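The kill traced just above is followed by a wait below; together with the earlier kill -0 liveness check they form the usual terminate-and-reap pattern. A standalone sketch of that pattern (68545 is simply the target pid from this particular run, assumed to have been captured when the app was launched by the same shell):

pid=68545
# confirm the process still exists, terminate it, then reap it so its exit
# status is collected before the rest of the teardown continues; wait only
# works here because the target was started by this same shell
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    wait "$pid" 2>/dev/null || true
fi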
15:33:11 -- common/autotest_common.sh@960 -- # wait 68545 00:10:10.078 15:33:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:10.078 15:33:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:10.078 15:33:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:10.078 15:33:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.078 15:33:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.078 15:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.078 15:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.078 15:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.078 15:33:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:10.078 ************************************ 00:10:10.078 END TEST nvmf_fio_target 00:10:10.078 ************************************ 00:10:10.078 00:10:10.078 real 0m19.309s 00:10:10.078 user 1m11.916s 00:10:10.078 sys 0m10.504s 00:10:10.078 15:33:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:10.078 15:33:11 -- common/autotest_common.sh@10 -- # set +x 00:10:10.078 15:33:11 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.078 15:33:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:10.078 15:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.078 15:33:11 -- common/autotest_common.sh@10 -- # set +x 00:10:10.338 ************************************ 00:10:10.338 START TEST nvmf_bdevio 00:10:10.338 ************************************ 00:10:10.338 15:33:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.338 * Looking for test storage... 
00:10:10.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.338 15:33:11 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.338 15:33:11 -- nvmf/common.sh@7 -- # uname -s 00:10:10.338 15:33:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.338 15:33:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.338 15:33:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.339 15:33:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.339 15:33:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.339 15:33:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.339 15:33:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.339 15:33:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.339 15:33:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.339 15:33:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.339 15:33:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:10.339 15:33:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:10.339 15:33:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.339 15:33:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.339 15:33:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.339 15:33:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.339 15:33:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.339 15:33:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.339 15:33:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.339 15:33:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.339 15:33:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.339 15:33:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.339 15:33:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.339 15:33:11 -- paths/export.sh@5 -- # export PATH 00:10:10.339 15:33:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.339 15:33:11 -- nvmf/common.sh@47 -- # : 0 00:10:10.339 15:33:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.339 15:33:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.339 15:33:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.339 15:33:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.339 15:33:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.339 15:33:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.339 15:33:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.339 15:33:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.339 15:33:11 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.339 15:33:11 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.339 15:33:11 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:10.339 15:33:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:10.339 15:33:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.339 15:33:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:10.339 15:33:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:10.339 15:33:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:10.339 15:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.339 15:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.339 15:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.339 15:33:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:10.339 15:33:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:10.339 15:33:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:10.339 15:33:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:10.339 15:33:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:10.339 15:33:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:10.339 15:33:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.339 15:33:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.339 15:33:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:10.339 15:33:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:10.339 15:33:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.339 15:33:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.339 15:33:11 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.339 15:33:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.339 15:33:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.339 15:33:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.339 15:33:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.339 15:33:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.339 15:33:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:10.339 15:33:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:10.339 Cannot find device "nvmf_tgt_br" 00:10:10.339 15:33:11 -- nvmf/common.sh@155 -- # true 00:10:10.339 15:33:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.339 Cannot find device "nvmf_tgt_br2" 00:10:10.339 15:33:11 -- nvmf/common.sh@156 -- # true 00:10:10.339 15:33:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:10.339 15:33:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:10.339 Cannot find device "nvmf_tgt_br" 00:10:10.339 15:33:11 -- nvmf/common.sh@158 -- # true 00:10:10.339 15:33:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:10.339 Cannot find device "nvmf_tgt_br2" 00:10:10.339 15:33:11 -- nvmf/common.sh@159 -- # true 00:10:10.339 15:33:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:10.339 15:33:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:10.339 15:33:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.600 15:33:11 -- nvmf/common.sh@162 -- # true 00:10:10.600 15:33:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.600 15:33:11 -- nvmf/common.sh@163 -- # true 00:10:10.600 15:33:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.600 15:33:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.600 15:33:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.600 15:33:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.600 15:33:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.600 15:33:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.600 15:33:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.600 15:33:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:10.600 15:33:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:10.600 15:33:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:10.600 15:33:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:10.600 15:33:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:10.600 15:33:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:10.600 15:33:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.600 15:33:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.600 15:33:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:10.600 15:33:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:10.600 15:33:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:10.600 15:33:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.600 15:33:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.600 15:33:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.600 15:33:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.600 15:33:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.600 15:33:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:10.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:10:10.600 00:10:10.600 --- 10.0.0.2 ping statistics --- 00:10:10.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.600 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:10.600 15:33:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:10.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:10.600 00:10:10.600 --- 10.0.0.3 ping statistics --- 00:10:10.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.600 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:10.600 15:33:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:10.600 00:10:10.600 --- 10.0.0.1 ping statistics --- 00:10:10.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.600 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:10.600 15:33:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.600 15:33:11 -- nvmf/common.sh@422 -- # return 0 00:10:10.600 15:33:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:10.600 15:33:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.600 15:33:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:10.600 15:33:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:10.600 15:33:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.600 15:33:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:10.600 15:33:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:10.600 15:33:11 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:10.600 15:33:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:10.600 15:33:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:10.600 15:33:11 -- common/autotest_common.sh@10 -- # set +x 00:10:10.600 15:33:11 -- nvmf/common.sh@470 -- # nvmfpid=69239 00:10:10.600 15:33:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:10.600 15:33:11 -- nvmf/common.sh@471 -- # waitforlisten 69239 00:10:10.600 15:33:11 -- common/autotest_common.sh@817 -- # '[' -z 69239 ']' 00:10:10.600 15:33:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.600 15:33:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:10.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
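For reference, the veth/namespace topology that nvmf_veth_init builds in the lines above can be reproduced by hand with a minimal sketch like the following. Interface, bridge, and namespace names are taken verbatim from this log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted for brevity, so this is a simplified restatement rather than the exact code path of nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the ping output above

The host-side ends of each veth pair (nvmf_init_br, nvmf_tgt_br) sit on the nvmf_br bridge, while the addressed ends carry the initiator (10.0.0.1) and in-namespace target (10.0.0.2) addresses used by the tests that follow.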
00:10:10.600 15:33:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.600 15:33:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:10.600 15:33:11 -- common/autotest_common.sh@10 -- # set +x 00:10:10.600 [2024-04-17 15:33:12.031030] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:10.600 [2024-04-17 15:33:12.031296] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.860 [2024-04-17 15:33:12.169300] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.860 [2024-04-17 15:33:12.293305] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.860 [2024-04-17 15:33:12.293676] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.860 [2024-04-17 15:33:12.293712] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.860 [2024-04-17 15:33:12.293721] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.860 [2024-04-17 15:33:12.293728] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.860 [2024-04-17 15:33:12.293975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.860 [2024-04-17 15:33:12.294075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.860 [2024-04-17 15:33:12.294227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.860 [2024-04-17 15:33:12.294231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.822 15:33:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:11.822 15:33:12 -- common/autotest_common.sh@850 -- # return 0 00:10:11.822 15:33:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:11.822 15:33:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:11.822 15:33:12 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 15:33:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.822 15:33:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.822 15:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.822 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 [2024-04-17 15:33:13.035065] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.822 15:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.822 15:33:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.822 15:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.822 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 Malloc0 00:10:11.822 15:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.822 15:33:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.822 15:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.822 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 15:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.822 15:33:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.822 15:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.822 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 15:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.822 15:33:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.822 15:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.822 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.822 [2024-04-17 15:33:13.120725] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.822 15:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.822 15:33:13 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:11.822 15:33:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:11.822 15:33:13 -- nvmf/common.sh@521 -- # config=() 00:10:11.822 15:33:13 -- nvmf/common.sh@521 -- # local subsystem config 00:10:11.822 15:33:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:11.822 15:33:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:11.822 { 00:10:11.822 "params": { 00:10:11.822 "name": "Nvme$subsystem", 00:10:11.822 "trtype": "$TEST_TRANSPORT", 00:10:11.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.822 "adrfam": "ipv4", 00:10:11.822 "trsvcid": "$NVMF_PORT", 00:10:11.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.822 "hdgst": ${hdgst:-false}, 00:10:11.823 "ddgst": ${ddgst:-false} 00:10:11.823 }, 00:10:11.823 "method": "bdev_nvme_attach_controller" 00:10:11.823 } 00:10:11.823 EOF 00:10:11.823 )") 00:10:11.823 15:33:13 -- nvmf/common.sh@543 -- # cat 00:10:11.823 15:33:13 -- nvmf/common.sh@545 -- # jq . 00:10:11.823 15:33:13 -- nvmf/common.sh@546 -- # IFS=, 00:10:11.823 15:33:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:11.823 "params": { 00:10:11.823 "name": "Nvme1", 00:10:11.823 "trtype": "tcp", 00:10:11.823 "traddr": "10.0.0.2", 00:10:11.823 "adrfam": "ipv4", 00:10:11.823 "trsvcid": "4420", 00:10:11.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.823 "hdgst": false, 00:10:11.823 "ddgst": false 00:10:11.823 }, 00:10:11.823 "method": "bdev_nvme_attach_controller" 00:10:11.823 }' 00:10:11.823 [2024-04-17 15:33:13.179713] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:11.823 [2024-04-17 15:33:13.179827] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69275 ] 00:10:12.082 [2024-04-17 15:33:13.322566] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.082 [2024-04-17 15:33:13.438567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.082 [2024-04-17 15:33:13.438674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.082 [2024-04-17 15:33:13.438682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.082 [2024-04-17 15:33:13.448698] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
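The target-side provisioning performed above by rpc_cmd (bdevio.sh lines 18-22) uses the same RPC methods exposed by scripts/rpc.py, which this log already invokes directly elsewhere. A condensed one-shot equivalent is sketched below, assuming the nvmf_tgt started above is still serving the default /var/tmp/spdk.sock; this is a hedged restatement of the calls seen in the log, not a transcript of what rpc_cmd actually executed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, options as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0                                 # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The subsystem, namespace, and listener created here are exactly what the bdevio initiator below attaches to via the generated JSON ("traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1").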
00:10:12.082 [2024-04-17 15:33:13.448924] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:12.082 [2024-04-17 15:33:13.449060] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:10:12.342 [2024-04-17 15:33:13.653552] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:10:12.342 I/O targets: 00:10:12.342 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:12.342 00:10:12.342 00:10:12.342 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.342 http://cunit.sourceforge.net/ 00:10:12.342 00:10:12.342 00:10:12.342 Suite: bdevio tests on: Nvme1n1 00:10:12.342 Test: blockdev write read block ...passed 00:10:12.342 Test: blockdev write zeroes read block ...passed 00:10:12.342 Test: blockdev write zeroes read no split ...passed 00:10:12.342 Test: blockdev write zeroes read split ...passed 00:10:12.342 Test: blockdev write zeroes read split partial ...passed 00:10:12.342 Test: blockdev reset ...[2024-04-17 15:33:13.689713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:12.342 [2024-04-17 15:33:13.690114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5cdb0 (9): Bad file descriptor 00:10:12.342 [2024-04-17 15:33:13.703096] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:12.342 passed 00:10:12.342 Test: blockdev write read 8 blocks ...passed 00:10:12.342 Test: blockdev write read size > 128k ...passed 00:10:12.342 Test: blockdev write read invalid size ...passed 00:10:12.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:12.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:12.342 Test: blockdev write read max offset ...passed 00:10:12.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:12.342 Test: blockdev writev readv 8 blocks ...passed 00:10:12.342 Test: blockdev writev readv 30 x 1block ...passed 00:10:12.342 Test: blockdev writev readv block ...passed 00:10:12.342 Test: blockdev writev readv size > 128k ...passed 00:10:12.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:12.342 Test: blockdev comparev and writev ...[2024-04-17 15:33:13.713178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.713221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.713244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.713255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.713535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.713552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.713569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:10:12.342 [2024-04-17 15:33:13.713579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.713880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.713899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.714181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.714197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.714477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.714500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.714518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.342 [2024-04-17 15:33:13.714527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:12.342 passed 00:10:12.342 Test: blockdev nvme passthru rw ...passed 00:10:12.342 Test: blockdev nvme passthru vendor specific ...[2024-04-17 15:33:13.715694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.342 [2024-04-17 15:33:13.715730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:12.342 passed 00:10:12.342 Test: blockdev nvme admin passthru ...[2024-04-17 15:33:13.716072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.342 [2024-04-17 15:33:13.716108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.716223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.342 [2024-04-17 15:33:13.716238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:12.342 [2024-04-17 15:33:13.716344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.342 [2024-04-17 15:33:13.716359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:12.342 passed 00:10:12.342 Test: blockdev copy ...passed 00:10:12.342 00:10:12.342 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.342 suites 1 1 n/a 0 0 00:10:12.342 tests 23 23 23 0 0 00:10:12.342 asserts 152 152 152 0 n/a 00:10:12.342 00:10:12.342 Elapsed time = 0.149 seconds 00:10:12.600 15:33:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.600 15:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.600 15:33:14 -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.600 15:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.601 15:33:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:12.859 15:33:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:12.859 15:33:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:12.859 15:33:14 -- nvmf/common.sh@117 -- # sync 00:10:12.859 15:33:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.859 15:33:14 -- nvmf/common.sh@120 -- # set +e 00:10:12.859 15:33:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.859 15:33:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.859 rmmod nvme_tcp 00:10:12.859 rmmod nvme_fabrics 00:10:12.859 rmmod nvme_keyring 00:10:12.859 15:33:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.859 15:33:14 -- nvmf/common.sh@124 -- # set -e 00:10:12.859 15:33:14 -- nvmf/common.sh@125 -- # return 0 00:10:12.859 15:33:14 -- nvmf/common.sh@478 -- # '[' -n 69239 ']' 00:10:12.859 15:33:14 -- nvmf/common.sh@479 -- # killprocess 69239 00:10:12.859 15:33:14 -- common/autotest_common.sh@936 -- # '[' -z 69239 ']' 00:10:12.859 15:33:14 -- common/autotest_common.sh@940 -- # kill -0 69239 00:10:12.859 15:33:14 -- common/autotest_common.sh@941 -- # uname 00:10:12.859 15:33:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:12.859 15:33:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69239 00:10:12.859 15:33:14 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:12.859 15:33:14 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:12.859 killing process with pid 69239 00:10:12.859 15:33:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69239' 00:10:12.859 15:33:14 -- common/autotest_common.sh@955 -- # kill 69239 00:10:12.859 15:33:14 -- common/autotest_common.sh@960 -- # wait 69239 00:10:13.118 15:33:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:13.118 15:33:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:13.118 15:33:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:13.118 15:33:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.118 15:33:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.118 15:33:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.118 15:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.118 15:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.376 15:33:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:13.376 00:10:13.376 real 0m3.042s 00:10:13.376 user 0m10.231s 00:10:13.376 sys 0m0.846s 00:10:13.376 15:33:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:13.376 ************************************ 00:10:13.376 END TEST nvmf_bdevio 00:10:13.376 15:33:14 -- common/autotest_common.sh@10 -- # set +x 00:10:13.376 ************************************ 00:10:13.376 15:33:14 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:10:13.376 15:33:14 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:13.376 15:33:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:13.376 15:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:13.376 15:33:14 -- common/autotest_common.sh@10 -- # set +x 00:10:13.376 ************************************ 00:10:13.376 START TEST nvmf_bdevio_no_huge 00:10:13.376 ************************************ 
00:10:13.376 15:33:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:13.376 * Looking for test storage... 00:10:13.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.376 15:33:14 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.376 15:33:14 -- nvmf/common.sh@7 -- # uname -s 00:10:13.376 15:33:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.376 15:33:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.376 15:33:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.376 15:33:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.376 15:33:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.376 15:33:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.376 15:33:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.376 15:33:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.376 15:33:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.376 15:33:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.376 15:33:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:13.376 15:33:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:13.376 15:33:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.376 15:33:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.376 15:33:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.376 15:33:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.376 15:33:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.376 15:33:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.376 15:33:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.376 15:33:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.377 15:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.377 15:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.377 15:33:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.377 15:33:14 -- paths/export.sh@5 -- # export PATH 00:10:13.377 15:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.377 15:33:14 -- nvmf/common.sh@47 -- # : 0 00:10:13.377 15:33:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.377 15:33:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.377 15:33:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.377 15:33:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.377 15:33:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.377 15:33:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.377 15:33:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.377 15:33:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.377 15:33:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.377 15:33:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.377 15:33:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:13.377 15:33:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:13.377 15:33:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.377 15:33:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:13.377 15:33:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:13.377 15:33:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:13.377 15:33:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.377 15:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.377 15:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.377 15:33:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:13.377 15:33:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:13.377 15:33:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:13.377 15:33:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:13.377 15:33:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:13.377 15:33:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:13.377 15:33:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.377 15:33:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.377 15:33:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:13.377 15:33:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:13.377 15:33:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.377 15:33:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.377 15:33:14 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.377 15:33:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.377 15:33:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.377 15:33:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.377 15:33:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.377 15:33:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.377 15:33:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:13.377 15:33:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:13.377 Cannot find device "nvmf_tgt_br" 00:10:13.377 15:33:14 -- nvmf/common.sh@155 -- # true 00:10:13.377 15:33:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.636 Cannot find device "nvmf_tgt_br2" 00:10:13.636 15:33:14 -- nvmf/common.sh@156 -- # true 00:10:13.636 15:33:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:13.636 15:33:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:13.636 Cannot find device "nvmf_tgt_br" 00:10:13.636 15:33:14 -- nvmf/common.sh@158 -- # true 00:10:13.636 15:33:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:13.636 Cannot find device "nvmf_tgt_br2" 00:10:13.636 15:33:14 -- nvmf/common.sh@159 -- # true 00:10:13.636 15:33:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:13.636 15:33:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:13.636 15:33:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.636 15:33:14 -- nvmf/common.sh@162 -- # true 00:10:13.636 15:33:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.636 15:33:14 -- nvmf/common.sh@163 -- # true 00:10:13.636 15:33:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.636 15:33:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.636 15:33:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.636 15:33:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.636 15:33:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.636 15:33:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.636 15:33:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.636 15:33:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:13.636 15:33:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:13.636 15:33:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:13.636 15:33:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:13.636 15:33:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:13.636 15:33:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:13.636 15:33:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.636 15:33:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.636 15:33:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:13.636 15:33:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:13.636 15:33:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:13.636 15:33:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.636 15:33:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.636 15:33:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.636 15:33:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.636 15:33:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.636 15:33:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:13.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:10:13.895 00:10:13.895 --- 10.0.0.2 ping statistics --- 00:10:13.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.895 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:13.895 15:33:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:13.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:13.895 00:10:13.895 --- 10.0.0.3 ping statistics --- 00:10:13.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.895 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:13.895 15:33:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:13.895 00:10:13.895 --- 10.0.0.1 ping statistics --- 00:10:13.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.895 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:13.895 15:33:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.895 15:33:15 -- nvmf/common.sh@422 -- # return 0 00:10:13.895 15:33:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:13.895 15:33:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.895 15:33:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:13.895 15:33:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:13.895 15:33:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.895 15:33:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:13.895 15:33:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:13.895 15:33:15 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:13.895 15:33:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:13.895 15:33:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:13.895 15:33:15 -- common/autotest_common.sh@10 -- # set +x 00:10:13.895 15:33:15 -- nvmf/common.sh@470 -- # nvmfpid=69462 00:10:13.895 15:33:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:13.895 15:33:15 -- nvmf/common.sh@471 -- # waitforlisten 69462 00:10:13.895 15:33:15 -- common/autotest_common.sh@817 -- # '[' -z 69462 ']' 00:10:13.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:13.895 15:33:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.895 15:33:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:13.895 15:33:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.895 15:33:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:13.895 15:33:15 -- common/autotest_common.sh@10 -- # set +x 00:10:13.895 [2024-04-17 15:33:15.164325] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:13.895 [2024-04-17 15:33:15.164415] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:13.895 [2024-04-17 15:33:15.305928] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.153 [2024-04-17 15:33:15.432528] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.153 [2024-04-17 15:33:15.432944] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.153 [2024-04-17 15:33:15.433080] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.153 [2024-04-17 15:33:15.433204] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.153 [2024-04-17 15:33:15.433276] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.154 [2024-04-17 15:33:15.433498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:14.154 [2024-04-17 15:33:15.433598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:14.154 [2024-04-17 15:33:15.433697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:14.154 [2024-04-17 15:33:15.433699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.721 15:33:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:14.721 15:33:16 -- common/autotest_common.sh@850 -- # return 0 00:10:14.721 15:33:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:14.980 15:33:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:14.980 15:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:14.980 15:33:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.980 15:33:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.980 15:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.980 15:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:14.980 [2024-04-17 15:33:16.203580] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.980 15:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.980 15:33:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.980 15:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.980 15:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:14.980 Malloc0 00:10:14.980 15:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.981 15:33:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.981 15:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.981 15:33:16 -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.981 15:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.981 15:33:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.981 15:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.981 15:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:14.981 15:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.981 15:33:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.981 15:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.981 15:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:14.981 [2024-04-17 15:33:16.248697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.981 15:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.981 15:33:16 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:14.981 15:33:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:14.981 15:33:16 -- nvmf/common.sh@521 -- # config=() 00:10:14.981 15:33:16 -- nvmf/common.sh@521 -- # local subsystem config 00:10:14.981 15:33:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:14.981 15:33:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:14.981 { 00:10:14.981 "params": { 00:10:14.981 "name": "Nvme$subsystem", 00:10:14.981 "trtype": "$TEST_TRANSPORT", 00:10:14.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.981 "adrfam": "ipv4", 00:10:14.981 "trsvcid": "$NVMF_PORT", 00:10:14.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.981 "hdgst": ${hdgst:-false}, 00:10:14.981 "ddgst": ${ddgst:-false} 00:10:14.981 }, 00:10:14.981 "method": "bdev_nvme_attach_controller" 00:10:14.981 } 00:10:14.981 EOF 00:10:14.981 )") 00:10:14.981 15:33:16 -- nvmf/common.sh@543 -- # cat 00:10:14.981 15:33:16 -- nvmf/common.sh@545 -- # jq . 00:10:14.981 15:33:16 -- nvmf/common.sh@546 -- # IFS=, 00:10:14.981 15:33:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:14.981 "params": { 00:10:14.981 "name": "Nvme1", 00:10:14.981 "trtype": "tcp", 00:10:14.981 "traddr": "10.0.0.2", 00:10:14.981 "adrfam": "ipv4", 00:10:14.981 "trsvcid": "4420", 00:10:14.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.981 "hdgst": false, 00:10:14.981 "ddgst": false 00:10:14.981 }, 00:10:14.981 "method": "bdev_nvme_attach_controller" 00:10:14.981 }' 00:10:14.981 [2024-04-17 15:33:16.309606] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:10:14.981 [2024-04-17 15:33:16.309693] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69498 ] 00:10:15.239 [2024-04-17 15:33:16.459553] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.239 [2024-04-17 15:33:16.615426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.240 [2024-04-17 15:33:16.615561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.240 [2024-04-17 15:33:16.615568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.240 [2024-04-17 15:33:16.625645] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:15.240 [2024-04-17 15:33:16.625876] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:15.240 [2024-04-17 15:33:16.625899] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:10:15.498 [2024-04-17 15:33:16.800197] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:10:15.498 I/O targets: 00:10:15.498 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.498 00:10:15.498 00:10:15.498 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.498 http://cunit.sourceforge.net/ 00:10:15.498 00:10:15.498 00:10:15.498 Suite: bdevio tests on: Nvme1n1 00:10:15.498 Test: blockdev write read block ...passed 00:10:15.498 Test: blockdev write zeroes read block ...passed 00:10:15.498 Test: blockdev write zeroes read no split ...passed 00:10:15.498 Test: blockdev write zeroes read split ...passed 00:10:15.498 Test: blockdev write zeroes read split partial ...passed 00:10:15.499 Test: blockdev reset ...[2024-04-17 15:33:16.843779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:15.499 [2024-04-17 15:33:16.843899] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc31b0 (9): Bad file descriptor 00:10:15.499 [2024-04-17 15:33:16.858792] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:15.499 passed 00:10:15.499 Test: blockdev write read 8 blocks ...passed 00:10:15.499 Test: blockdev write read size > 128k ...passed 00:10:15.499 Test: blockdev write read invalid size ...passed 00:10:15.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.499 Test: blockdev write read max offset ...passed 00:10:15.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.499 Test: blockdev writev readv 8 blocks ...passed 00:10:15.499 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.499 Test: blockdev writev readv block ...passed 00:10:15.499 Test: blockdev writev readv size > 128k ...passed 00:10:15.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.499 Test: blockdev comparev and writev ...[2024-04-17 15:33:16.869295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.869339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.869361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.869373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.869663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.869681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.869698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.869708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.870021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.870072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.870432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.870462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.870481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.499 [2024-04-17 15:33:16.870490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:15.499 passed 00:10:15.499 Test: blockdev nvme passthru rw ...passed 00:10:15.499 Test: blockdev nvme passthru vendor specific ...[2024-04-17 15:33:16.871644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.499 [2024-04-17 15:33:16.871678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.871819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.499 [2024-04-17 15:33:16.871836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:15.499 [2024-04-17 15:33:16.872242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.499 [2024-04-17 15:33:16.872276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:15.499 passed 00:10:15.499 Test: blockdev nvme admin passthru ...[2024-04-17 15:33:16.872489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.499 [2024-04-17 15:33:16.872512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:15.499 passed 00:10:15.499 Test: blockdev copy ...passed 00:10:15.499 00:10:15.499 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.499 suites 1 1 n/a 0 0 00:10:15.499 tests 23 23 23 0 0 00:10:15.499 asserts 152 152 152 0 n/a 00:10:15.499 00:10:15.499 Elapsed time = 0.168 seconds 00:10:16.066 15:33:17 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.066 15:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.066 15:33:17 -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 15:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.066 15:33:17 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:16.066 15:33:17 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:16.066 15:33:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:16.066 15:33:17 -- nvmf/common.sh@117 -- # sync 00:10:16.066 15:33:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.066 15:33:17 -- nvmf/common.sh@120 -- # set +e 00:10:16.066 15:33:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.066 15:33:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.066 rmmod nvme_tcp 00:10:16.066 rmmod nvme_fabrics 00:10:16.066 rmmod nvme_keyring 00:10:16.066 15:33:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.066 15:33:17 -- nvmf/common.sh@124 -- # set -e 00:10:16.066 15:33:17 -- nvmf/common.sh@125 -- # return 0 00:10:16.066 15:33:17 -- nvmf/common.sh@478 -- # '[' -n 69462 ']' 00:10:16.066 15:33:17 -- nvmf/common.sh@479 -- # killprocess 69462 00:10:16.066 15:33:17 -- common/autotest_common.sh@936 -- # '[' -z 69462 ']' 00:10:16.066 15:33:17 -- common/autotest_common.sh@940 -- # kill -0 69462 00:10:16.066 15:33:17 -- common/autotest_common.sh@941 -- # uname 00:10:16.066 15:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:16.066 15:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69462 00:10:16.066 killing process with pid 69462 00:10:16.066 
15:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:16.066 15:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:16.066 15:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69462' 00:10:16.066 15:33:17 -- common/autotest_common.sh@955 -- # kill 69462 00:10:16.066 15:33:17 -- common/autotest_common.sh@960 -- # wait 69462 00:10:16.723 15:33:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:16.723 15:33:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:16.723 15:33:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:16.723 15:33:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.723 15:33:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.723 15:33:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.723 15:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.723 15:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.723 15:33:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:16.723 ************************************ 00:10:16.723 END TEST nvmf_bdevio_no_huge 00:10:16.723 ************************************ 00:10:16.723 00:10:16.723 real 0m3.265s 00:10:16.723 user 0m11.018s 00:10:16.723 sys 0m1.245s 00:10:16.723 15:33:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:16.723 15:33:17 -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 15:33:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:16.723 15:33:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:16.723 15:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.723 15:33:17 -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 ************************************ 00:10:16.723 START TEST nvmf_tls 00:10:16.723 ************************************ 00:10:16.723 15:33:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:16.723 * Looking for test storage... 
00:10:16.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.723 15:33:18 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.723 15:33:18 -- nvmf/common.sh@7 -- # uname -s 00:10:16.723 15:33:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.723 15:33:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.723 15:33:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.723 15:33:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.723 15:33:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.723 15:33:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.723 15:33:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.723 15:33:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.723 15:33:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.723 15:33:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.723 15:33:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:16.723 15:33:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:10:16.723 15:33:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.724 15:33:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.724 15:33:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.724 15:33:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.724 15:33:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.724 15:33:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.724 15:33:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.724 15:33:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.724 15:33:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.724 15:33:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.724 15:33:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.724 15:33:18 -- paths/export.sh@5 -- # export PATH 00:10:16.724 15:33:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.724 15:33:18 -- nvmf/common.sh@47 -- # : 0 00:10:16.724 15:33:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.724 15:33:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.724 15:33:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.724 15:33:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.724 15:33:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.724 15:33:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.724 15:33:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.724 15:33:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.724 15:33:18 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.724 15:33:18 -- target/tls.sh@62 -- # nvmftestinit 00:10:16.724 15:33:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:16.724 15:33:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.724 15:33:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:16.724 15:33:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:16.724 15:33:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:16.724 15:33:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.724 15:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.724 15:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.983 15:33:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:16.983 15:33:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:16.983 15:33:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:16.983 15:33:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:16.983 15:33:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:16.983 15:33:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:16.983 15:33:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.983 15:33:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.983 15:33:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.983 15:33:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:16.983 15:33:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.983 15:33:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.983 15:33:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.983 
15:33:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.983 15:33:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.983 15:33:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.983 15:33:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.983 15:33:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.983 15:33:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:16.983 15:33:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:16.983 Cannot find device "nvmf_tgt_br" 00:10:16.983 15:33:18 -- nvmf/common.sh@155 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.983 Cannot find device "nvmf_tgt_br2" 00:10:16.983 15:33:18 -- nvmf/common.sh@156 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:16.983 15:33:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:16.983 Cannot find device "nvmf_tgt_br" 00:10:16.983 15:33:18 -- nvmf/common.sh@158 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:16.983 Cannot find device "nvmf_tgt_br2" 00:10:16.983 15:33:18 -- nvmf/common.sh@159 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:16.983 15:33:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:16.983 15:33:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.983 15:33:18 -- nvmf/common.sh@162 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.983 15:33:18 -- nvmf/common.sh@163 -- # true 00:10:16.983 15:33:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.983 15:33:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.983 15:33:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.983 15:33:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.983 15:33:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.983 15:33:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.983 15:33:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.983 15:33:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.983 15:33:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.983 15:33:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:16.983 15:33:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:16.983 15:33:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:16.983 15:33:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:16.983 15:33:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.983 15:33:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.983 15:33:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.983 15:33:18 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:16.983 15:33:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:16.983 15:33:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.242 15:33:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.242 15:33:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.242 15:33:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.242 15:33:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.242 15:33:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:17.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:17.242 00:10:17.242 --- 10.0.0.2 ping statistics --- 00:10:17.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.242 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:17.242 15:33:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:17.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:17.242 00:10:17.242 --- 10.0.0.3 ping statistics --- 00:10:17.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.242 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:17.242 15:33:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:17.242 00:10:17.242 --- 10.0.0.1 ping statistics --- 00:10:17.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.242 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:17.242 15:33:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.242 15:33:18 -- nvmf/common.sh@422 -- # return 0 00:10:17.242 15:33:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:17.242 15:33:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.242 15:33:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:17.242 15:33:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:17.242 15:33:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.242 15:33:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:17.242 15:33:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:17.242 15:33:18 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:17.242 15:33:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:17.242 15:33:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:17.242 15:33:18 -- common/autotest_common.sh@10 -- # set +x 00:10:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
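The nvmf_veth_init steps traced above reduce to a small veth/bridge topology: one initiator-side pair left in the root namespace, two target-side pairs moved into nvmf_tgt_ns_spdk, and the bridge-side ends enslaved to nvmf_br. A condensed sketch of the same layout, with every device and address name taken from the trace (a readable summary only, not a substitute for nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 inside the target namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring the links up and bridge the *_br ends together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # let NVMe/TCP (port 4420) in and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT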
00:10:17.242 15:33:18 -- nvmf/common.sh@470 -- # nvmfpid=69683 00:10:17.242 15:33:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:17.242 15:33:18 -- nvmf/common.sh@471 -- # waitforlisten 69683 00:10:17.242 15:33:18 -- common/autotest_common.sh@817 -- # '[' -z 69683 ']' 00:10:17.242 15:33:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.242 15:33:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.242 15:33:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.242 15:33:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.242 15:33:18 -- common/autotest_common.sh@10 -- # set +x 00:10:17.242 [2024-04-17 15:33:18.566907] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:17.242 [2024-04-17 15:33:18.567015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.501 [2024-04-17 15:33:18.704357] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.501 [2024-04-17 15:33:18.854931] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.501 [2024-04-17 15:33:18.855010] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.501 [2024-04-17 15:33:18.855032] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.501 [2024-04-17 15:33:18.855044] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.501 [2024-04-17 15:33:18.855053] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:17.501 [2024-04-17 15:33:18.855087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.068 15:33:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:18.068 15:33:19 -- common/autotest_common.sh@850 -- # return 0 00:10:18.068 15:33:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:18.068 15:33:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:18.068 15:33:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.068 15:33:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.068 15:33:19 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:10:18.068 15:33:19 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:18.326 true 00:10:18.326 15:33:19 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:18.326 15:33:19 -- target/tls.sh@73 -- # jq -r .tls_version 00:10:18.584 15:33:19 -- target/tls.sh@73 -- # version=0 00:10:18.584 15:33:19 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:10:18.584 15:33:19 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:18.843 15:33:20 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:18.843 15:33:20 -- target/tls.sh@81 -- # jq -r .tls_version 00:10:19.102 15:33:20 -- target/tls.sh@81 -- # version=13 00:10:19.102 15:33:20 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:10:19.102 15:33:20 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:19.361 15:33:20 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:19.361 15:33:20 -- target/tls.sh@89 -- # jq -r .tls_version 00:10:19.620 15:33:21 -- target/tls.sh@89 -- # version=7 00:10:19.620 15:33:21 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:10:19.620 15:33:21 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:19.620 15:33:21 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:10:19.879 15:33:21 -- target/tls.sh@96 -- # ktls=false 00:10:19.879 15:33:21 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:10:19.879 15:33:21 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:20.138 15:33:21 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:20.138 15:33:21 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:10:20.397 15:33:21 -- target/tls.sh@104 -- # ktls=true 00:10:20.397 15:33:21 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:10:20.397 15:33:21 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:20.655 15:33:21 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:10:20.655 15:33:21 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:20.914 15:33:22 -- target/tls.sh@112 -- # ktls=false 00:10:20.914 15:33:22 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:10:20.914 15:33:22 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:10:20.914 15:33:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:10:20.914 15:33:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # 
prefix=NVMeTLSkey-1 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # digest=1 00:10:20.914 15:33:22 -- nvmf/common.sh@694 -- # python - 00:10:20.914 15:33:22 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:20.914 15:33:22 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:10:20.914 15:33:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:10:20.914 15:33:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:10:20.914 15:33:22 -- nvmf/common.sh@693 -- # digest=1 00:10:20.914 15:33:22 -- nvmf/common.sh@694 -- # python - 00:10:20.914 15:33:22 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:20.914 15:33:22 -- target/tls.sh@121 -- # mktemp 00:10:20.914 15:33:22 -- target/tls.sh@121 -- # key_path=/tmp/tmp.ytVAlZ6ezA 00:10:20.914 15:33:22 -- target/tls.sh@122 -- # mktemp 00:10:20.914 15:33:22 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6xo0MWX1dF 00:10:20.914 15:33:22 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:20.914 15:33:22 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:20.914 15:33:22 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ytVAlZ6ezA 00:10:20.914 15:33:22 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6xo0MWX1dF 00:10:20.914 15:33:22 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:21.482 15:33:22 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:21.745 15:33:22 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ytVAlZ6ezA 00:10:21.745 15:33:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.ytVAlZ6ezA 00:10:21.745 15:33:22 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:21.745 [2024-04-17 15:33:23.174216] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.009 15:33:23 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:22.009 15:33:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:22.268 [2024-04-17 15:33:23.594317] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:22.268 [2024-04-17 15:33:23.594552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.268 15:33:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:22.528 malloc0 00:10:22.528 15:33:23 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:22.787 15:33:24 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ytVAlZ6ezA 00:10:23.055 [2024-04-17 15:33:24.310628] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature 
PSK path to be removed in v24.09 00:10:23.055 15:33:24 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ytVAlZ6ezA 00:10:35.301 Initializing NVMe Controllers 00:10:35.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.301 Initialization complete. Launching workers. 00:10:35.301 ======================================================== 00:10:35.301 Latency(us) 00:10:35.301 Device Information : IOPS MiB/s Average min max 00:10:35.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9818.76 38.35 6519.85 1304.04 8291.25 00:10:35.301 ======================================================== 00:10:35.301 Total : 9818.76 38.35 6519.85 1304.04 8291.25 00:10:35.301 00:10:35.301 15:33:34 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ytVAlZ6ezA 00:10:35.301 15:33:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:35.301 15:33:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:35.301 15:33:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:35.301 15:33:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ytVAlZ6ezA' 00:10:35.301 15:33:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:35.301 15:33:34 -- target/tls.sh@28 -- # bdevperf_pid=69915 00:10:35.301 15:33:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:35.301 15:33:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:35.301 15:33:34 -- target/tls.sh@31 -- # waitforlisten 69915 /var/tmp/bdevperf.sock 00:10:35.301 15:33:34 -- common/autotest_common.sh@817 -- # '[' -z 69915 ']' 00:10:35.301 15:33:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.301 15:33:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:35.301 15:33:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.301 15:33:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:35.301 15:33:34 -- common/autotest_common.sh@10 -- # set +x 00:10:35.301 [2024-04-17 15:33:34.579467] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
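The PSK files used for the rest of the run (/tmp/tmp.ytVAlZ6ezA and /tmp/tmp.6xo0MWX1dF) hold the interchange-format strings produced by format_interchange_psk in the trace above (target/tls.sh@118/@119). Judging from the logged values, the inline python wraps the key string plus a 4-byte check value in base64 and prefixes it with the key identity version and a hash id (01 here, 02 for the longer key created later in the run). A minimal sketch of that transformation, in the same bash-plus-inline-python style the trace itself uses; the function name, the choice of CRC32 and its byte order are assumptions inferred from the printed key, not copied from nvmf/common.sh:

    format_interchange_psk_sketch() {
      local key=$1 digest=$2   # e.g. 00112233445566778899aabbccddeeff 1
      python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the key is treated as a literal string
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte check value appended before encoding
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$key" "$digest"
    }
    # For reference, the digest-1 key printed in the trace was:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

(Note the python lines inside the single-quoted -c string must stay at column 0 for the snippet to run.)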
00:10:35.301 [2024-04-17 15:33:34.579599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69915 ] 00:10:35.301 [2024-04-17 15:33:34.718024] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.301 [2024-04-17 15:33:34.856997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.301 15:33:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:35.301 15:33:35 -- common/autotest_common.sh@850 -- # return 0 00:10:35.301 15:33:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ytVAlZ6ezA 00:10:35.301 [2024-04-17 15:33:35.698135] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:35.301 [2024-04-17 15:33:35.698919] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:10:35.301 TLSTESTn1 00:10:35.301 15:33:35 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:35.301 Running I/O for 10 seconds... 00:10:45.283 00:10:45.283 Latency(us) 00:10:45.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.283 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:45.283 Verification LBA range: start 0x0 length 0x2000 00:10:45.283 TLSTESTn1 : 10.02 4111.99 16.06 0.00 0.00 31068.36 9949.56 22401.40 00:10:45.283 =================================================================================================================== 00:10:45.283 Total : 4111.99 16.06 0.00 0.00 31068.36 9949.56 22401.40 00:10:45.283 0 00:10:45.283 15:33:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:45.283 15:33:45 -- target/tls.sh@45 -- # killprocess 69915 00:10:45.283 15:33:45 -- common/autotest_common.sh@936 -- # '[' -z 69915 ']' 00:10:45.283 15:33:45 -- common/autotest_common.sh@940 -- # kill -0 69915 00:10:45.283 15:33:45 -- common/autotest_common.sh@941 -- # uname 00:10:45.283 15:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.283 15:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69915 00:10:45.283 killing process with pid 69915 00:10:45.283 15:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:45.283 15:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:45.283 15:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69915' 00:10:45.283 15:33:45 -- common/autotest_common.sh@955 -- # kill 69915 00:10:45.283 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.283 00:10:45.283 Latency(us) 00:10:45.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.283 =================================================================================================================== 00:10:45.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.283 15:33:45 -- common/autotest_common.sh@960 -- # wait 69915 00:10:45.283 [2024-04-17 15:33:45.977699] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:10:45.283 15:33:46 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xo0MWX1dF 00:10:45.283 15:33:46 -- common/autotest_common.sh@638 -- # local es=0 00:10:45.283 15:33:46 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xo0MWX1dF 00:10:45.283 15:33:46 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:10:45.283 15:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:45.283 15:33:46 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:10:45.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:45.283 15:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:45.283 15:33:46 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6xo0MWX1dF 00:10:45.283 15:33:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:45.283 15:33:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:45.283 15:33:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:45.283 15:33:46 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6xo0MWX1dF' 00:10:45.283 15:33:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:45.283 15:33:46 -- target/tls.sh@28 -- # bdevperf_pid=70054 00:10:45.283 15:33:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:45.283 15:33:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:45.283 15:33:46 -- target/tls.sh@31 -- # waitforlisten 70054 /var/tmp/bdevperf.sock 00:10:45.283 15:33:46 -- common/autotest_common.sh@817 -- # '[' -z 70054 ']' 00:10:45.283 15:33:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:45.283 15:33:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:45.283 15:33:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:45.283 15:33:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:45.283 15:33:46 -- common/autotest_common.sh@10 -- # set +x 00:10:45.283 [2024-04-17 15:33:46.358129] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
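Stripped of the xtrace noise, the positive-path TLS setup that the failing attach attempts below are contrasted against is a short RPC sequence: pick the ssl sock implementation, pin the TLS version, create the TCP transport and a subsystem whose listener requires TLS (-k), register the host together with its PSK file, then attach from bdevperf with the matching --psk. A condensed sketch using the same rpc.py calls that appear in the trace (paths and NQNs copied from it; rpc= points at the test VM's checkout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.ytVAlZ6ezA     # 0600 file containing the NVMeTLSkey-1:01:... string

    # target side (nvmf_tgt started with --wait-for-rpc)
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

    # initiator side (bdevperf's RPC socket); only the matching key/host/subsystem combination
    # attaches cleanly -- the runs below swap in the wrong key, wrong NQNs, or no key at all.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key"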
00:10:45.283 [2024-04-17 15:33:46.358229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70054 ] 00:10:45.283 [2024-04-17 15:33:46.493353] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.283 [2024-04-17 15:33:46.593442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.220 15:33:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:46.220 15:33:47 -- common/autotest_common.sh@850 -- # return 0 00:10:46.220 15:33:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xo0MWX1dF 00:10:46.220 [2024-04-17 15:33:47.593636] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:46.220 [2024-04-17 15:33:47.594424] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:10:46.220 [2024-04-17 15:33:47.603535] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:46.220 [2024-04-17 15:33:47.604330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1038a80 (107): Transport endpoint is not connected 00:10:46.220 [2024-04-17 15:33:47.605317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1038a80 (9): Bad file descriptor 00:10:46.220 [2024-04-17 15:33:47.606314] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:46.220 [2024-04-17 15:33:47.606696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:46.220 [2024-04-17 15:33:47.606974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:46.220 request: 00:10:46.220 { 00:10:46.220 "name": "TLSTEST", 00:10:46.220 "trtype": "tcp", 00:10:46.220 "traddr": "10.0.0.2", 00:10:46.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.220 "adrfam": "ipv4", 00:10:46.220 "trsvcid": "4420", 00:10:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.220 "psk": "/tmp/tmp.6xo0MWX1dF", 00:10:46.220 "method": "bdev_nvme_attach_controller", 00:10:46.220 "req_id": 1 00:10:46.220 } 00:10:46.220 Got JSON-RPC error response 00:10:46.220 response: 00:10:46.220 { 00:10:46.220 "code": -32602, 00:10:46.220 "message": "Invalid parameters" 00:10:46.220 } 00:10:46.220 15:33:47 -- target/tls.sh@36 -- # killprocess 70054 00:10:46.220 15:33:47 -- common/autotest_common.sh@936 -- # '[' -z 70054 ']' 00:10:46.220 15:33:47 -- common/autotest_common.sh@940 -- # kill -0 70054 00:10:46.220 15:33:47 -- common/autotest_common.sh@941 -- # uname 00:10:46.220 15:33:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.220 15:33:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70054 00:10:46.479 killing process with pid 70054 00:10:46.479 Received shutdown signal, test time was about 10.000000 seconds 00:10:46.479 00:10:46.479 Latency(us) 00:10:46.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.479 =================================================================================================================== 00:10:46.479 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:46.479 15:33:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:46.479 15:33:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:46.479 15:33:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70054' 00:10:46.479 15:33:47 -- common/autotest_common.sh@955 -- # kill 70054 00:10:46.479 [2024-04-17 15:33:47.662828] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:10:46.479 15:33:47 -- common/autotest_common.sh@960 -- # wait 70054 00:10:46.738 15:33:48 -- target/tls.sh@37 -- # return 1 00:10:46.738 15:33:48 -- common/autotest_common.sh@641 -- # es=1 00:10:46.738 15:33:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:46.738 15:33:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:46.738 15:33:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:46.738 15:33:48 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ytVAlZ6ezA 00:10:46.738 15:33:48 -- common/autotest_common.sh@638 -- # local es=0 00:10:46.738 15:33:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ytVAlZ6ezA 00:10:46.738 15:33:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:10:46.738 15:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.738 15:33:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:10:46.738 15:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.738 15:33:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ytVAlZ6ezA 00:10:46.738 15:33:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:46.738 15:33:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:46.738 15:33:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:10:46.738 
15:33:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ytVAlZ6ezA' 00:10:46.738 15:33:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:46.738 15:33:48 -- target/tls.sh@28 -- # bdevperf_pid=70080 00:10:46.739 15:33:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:46.739 15:33:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:46.739 15:33:48 -- target/tls.sh@31 -- # waitforlisten 70080 /var/tmp/bdevperf.sock 00:10:46.739 15:33:48 -- common/autotest_common.sh@817 -- # '[' -z 70080 ']' 00:10:46.739 15:33:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:46.739 15:33:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:46.739 15:33:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:46.739 15:33:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:46.739 15:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:46.739 [2024-04-17 15:33:48.055414] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:46.739 [2024-04-17 15:33:48.056182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70080 ] 00:10:47.003 [2024-04-17 15:33:48.191300] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.003 [2024-04-17 15:33:48.318631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.589 15:33:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:47.589 15:33:48 -- common/autotest_common.sh@850 -- # return 0 00:10:47.589 15:33:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ytVAlZ6ezA 00:10:47.849 [2024-04-17 15:33:49.177751] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:47.849 [2024-04-17 15:33:49.177957] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:10:47.849 [2024-04-17 15:33:49.183681] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:47.849 [2024-04-17 15:33:49.183724] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:47.849 [2024-04-17 15:33:49.183831] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:47.849 [2024-04-17 15:33:49.183955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1676a80 (107): Transport endpoint is not connected 00:10:47.849 [2024-04-17 15:33:49.184935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1676a80 (9): Bad file descriptor 00:10:47.849 [2024-04-17 
15:33:49.185941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:47.849 [2024-04-17 15:33:49.185973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:47.849 [2024-04-17 15:33:49.185991] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:47.849 request: 00:10:47.849 { 00:10:47.849 "name": "TLSTEST", 00:10:47.849 "trtype": "tcp", 00:10:47.849 "traddr": "10.0.0.2", 00:10:47.849 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:47.849 "adrfam": "ipv4", 00:10:47.849 "trsvcid": "4420", 00:10:47.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.849 "psk": "/tmp/tmp.ytVAlZ6ezA", 00:10:47.849 "method": "bdev_nvme_attach_controller", 00:10:47.849 "req_id": 1 00:10:47.849 } 00:10:47.849 Got JSON-RPC error response 00:10:47.849 response: 00:10:47.849 { 00:10:47.849 "code": -32602, 00:10:47.849 "message": "Invalid parameters" 00:10:47.849 } 00:10:47.849 15:33:49 -- target/tls.sh@36 -- # killprocess 70080 00:10:47.849 15:33:49 -- common/autotest_common.sh@936 -- # '[' -z 70080 ']' 00:10:47.849 15:33:49 -- common/autotest_common.sh@940 -- # kill -0 70080 00:10:47.849 15:33:49 -- common/autotest_common.sh@941 -- # uname 00:10:47.849 15:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.849 15:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70080 00:10:47.849 killing process with pid 70080 00:10:47.849 Received shutdown signal, test time was about 10.000000 seconds 00:10:47.849 00:10:47.849 Latency(us) 00:10:47.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.849 =================================================================================================================== 00:10:47.849 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:47.849 15:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:47.849 15:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:47.849 15:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70080' 00:10:47.849 15:33:49 -- common/autotest_common.sh@955 -- # kill 70080 00:10:47.849 [2024-04-17 15:33:49.230387] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:10:47.849 15:33:49 -- common/autotest_common.sh@960 -- # wait 70080 00:10:48.417 15:33:49 -- target/tls.sh@37 -- # return 1 00:10:48.417 15:33:49 -- common/autotest_common.sh@641 -- # es=1 00:10:48.417 15:33:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:48.417 15:33:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:48.417 15:33:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:48.417 15:33:49 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ytVAlZ6ezA 00:10:48.417 15:33:49 -- common/autotest_common.sh@638 -- # local es=0 00:10:48.417 15:33:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ytVAlZ6ezA 00:10:48.417 15:33:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:10:48.417 15:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:48.417 15:33:49 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:10:48.417 15:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:48.417 
15:33:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ytVAlZ6ezA 00:10:48.417 15:33:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:48.417 15:33:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:48.417 15:33:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:48.417 15:33:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ytVAlZ6ezA' 00:10:48.417 15:33:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:48.417 15:33:49 -- target/tls.sh@28 -- # bdevperf_pid=70111 00:10:48.417 15:33:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:48.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.417 15:33:49 -- target/tls.sh@31 -- # waitforlisten 70111 /var/tmp/bdevperf.sock 00:10:48.417 15:33:49 -- common/autotest_common.sh@817 -- # '[' -z 70111 ']' 00:10:48.417 15:33:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:48.417 15:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.417 15:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.417 15:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.417 15:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.417 15:33:49 -- common/autotest_common.sh@10 -- # set +x 00:10:48.417 [2024-04-17 15:33:49.603208] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:48.417 [2024-04-17 15:33:49.603295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70111 ] 00:10:48.417 [2024-04-17 15:33:49.736485] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.676 [2024-04-17 15:33:49.867176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.244 15:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.244 15:33:50 -- common/autotest_common.sh@850 -- # return 0 00:10:49.245 15:33:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ytVAlZ6ezA 00:10:49.504 [2024-04-17 15:33:50.774664] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:49.504 [2024-04-17 15:33:50.775421] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:10:49.504 [2024-04-17 15:33:50.784066] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:49.504 [2024-04-17 15:33:50.784366] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:49.504 [2024-04-17 15:33:50.784708] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:10:49.504 [2024-04-17 15:33:50.785164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6a80 (107): Transport endpoint is not connected 00:10:49.504 [2024-04-17 15:33:50.786142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6a80 (9): Bad file descriptor 00:10:49.504 [2024-04-17 15:33:50.787137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:49.504 [2024-04-17 15:33:50.787509] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:49.504 [2024-04-17 15:33:50.787810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:49.504 request: 00:10:49.504 { 00:10:49.504 "name": "TLSTEST", 00:10:49.504 "trtype": "tcp", 00:10:49.504 "traddr": "10.0.0.2", 00:10:49.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.504 "adrfam": "ipv4", 00:10:49.504 "trsvcid": "4420", 00:10:49.504 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:49.504 "psk": "/tmp/tmp.ytVAlZ6ezA", 00:10:49.504 "method": "bdev_nvme_attach_controller", 00:10:49.504 "req_id": 1 00:10:49.504 } 00:10:49.504 Got JSON-RPC error response 00:10:49.504 response: 00:10:49.504 { 00:10:49.504 "code": -32602, 00:10:49.504 "message": "Invalid parameters" 00:10:49.504 } 00:10:49.504 15:33:50 -- target/tls.sh@36 -- # killprocess 70111 00:10:49.504 15:33:50 -- common/autotest_common.sh@936 -- # '[' -z 70111 ']' 00:10:49.504 15:33:50 -- common/autotest_common.sh@940 -- # kill -0 70111 00:10:49.504 15:33:50 -- common/autotest_common.sh@941 -- # uname 00:10:49.504 15:33:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.504 15:33:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70111 00:10:49.504 15:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:49.504 15:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:49.504 killing process with pid 70111 00:10:49.504 15:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70111' 00:10:49.504 15:33:50 -- common/autotest_common.sh@955 -- # kill 70111 00:10:49.504 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.504 00:10:49.504 Latency(us) 00:10:49.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.504 =================================================================================================================== 00:10:49.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:49.504 [2024-04-17 15:33:50.841541] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:10:49.504 15:33:50 -- common/autotest_common.sh@960 -- # wait 70111 00:10:49.763 15:33:51 -- target/tls.sh@37 -- # return 1 00:10:49.763 15:33:51 -- common/autotest_common.sh@641 -- # es=1 00:10:49.763 15:33:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:49.763 15:33:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:49.763 15:33:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:49.763 15:33:51 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:49.763 15:33:51 -- common/autotest_common.sh@638 -- # local es=0 00:10:49.763 15:33:51 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:49.763 
15:33:51 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:10:49.763 15:33:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:49.763 15:33:51 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:10:49.763 15:33:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:49.763 15:33:51 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:49.763 15:33:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:49.763 15:33:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:49.763 15:33:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:49.763 15:33:51 -- target/tls.sh@23 -- # psk= 00:10:49.763 15:33:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.763 15:33:51 -- target/tls.sh@28 -- # bdevperf_pid=70133 00:10:49.763 15:33:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:49.763 15:33:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:49.763 15:33:51 -- target/tls.sh@31 -- # waitforlisten 70133 /var/tmp/bdevperf.sock 00:10:49.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.763 15:33:51 -- common/autotest_common.sh@817 -- # '[' -z 70133 ']' 00:10:49.763 15:33:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.763 15:33:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.763 15:33:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.763 15:33:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.763 15:33:51 -- common/autotest_common.sh@10 -- # set +x 00:10:49.763 [2024-04-17 15:33:51.143433] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:10:49.763 [2024-04-17 15:33:51.143531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:10:50.023 [2024-04-17 15:33:51.282453] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.023 [2024-04-17 15:33:51.390993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.959 15:33:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.959 15:33:52 -- common/autotest_common.sh@850 -- # return 0 00:10:50.959 15:33:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:50.959 [2024-04-17 15:33:52.244300] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:50.959 [2024-04-17 15:33:52.246421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101adc0 (9): Bad file descriptor 00:10:50.959 [2024-04-17 15:33:52.247399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:50.959 [2024-04-17 15:33:52.247986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:50.959 [2024-04-17 15:33:52.248012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:50.959 request: 00:10:50.959 { 00:10:50.959 "name": "TLSTEST", 00:10:50.959 "trtype": "tcp", 00:10:50.959 "traddr": "10.0.0.2", 00:10:50.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.959 "adrfam": "ipv4", 00:10:50.959 "trsvcid": "4420", 00:10:50.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.959 "method": "bdev_nvme_attach_controller", 00:10:50.959 "req_id": 1 00:10:50.959 } 00:10:50.959 Got JSON-RPC error response 00:10:50.959 response: 00:10:50.959 { 00:10:50.959 "code": -32602, 00:10:50.959 "message": "Invalid parameters" 00:10:50.959 } 00:10:50.959 15:33:52 -- target/tls.sh@36 -- # killprocess 70133 00:10:50.959 15:33:52 -- common/autotest_common.sh@936 -- # '[' -z 70133 ']' 00:10:50.959 15:33:52 -- common/autotest_common.sh@940 -- # kill -0 70133 00:10:50.959 15:33:52 -- common/autotest_common.sh@941 -- # uname 00:10:50.959 15:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.959 15:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70133 00:10:50.959 killing process with pid 70133 00:10:50.959 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.959 00:10:50.959 Latency(us) 00:10:50.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.959 =================================================================================================================== 00:10:50.959 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:50.959 15:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:50.959 15:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:50.959 15:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70133' 00:10:50.959 15:33:52 -- common/autotest_common.sh@955 -- # kill 70133 00:10:50.959 15:33:52 -- common/autotest_common.sh@960 -- # wait 70133 00:10:51.218 
15:33:52 -- target/tls.sh@37 -- # return 1 00:10:51.219 15:33:52 -- common/autotest_common.sh@641 -- # es=1 00:10:51.219 15:33:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:51.219 15:33:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:51.219 15:33:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:51.219 15:33:52 -- target/tls.sh@158 -- # killprocess 69683 00:10:51.219 15:33:52 -- common/autotest_common.sh@936 -- # '[' -z 69683 ']' 00:10:51.219 15:33:52 -- common/autotest_common.sh@940 -- # kill -0 69683 00:10:51.219 15:33:52 -- common/autotest_common.sh@941 -- # uname 00:10:51.219 15:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.219 15:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69683 00:10:51.478 killing process with pid 69683 00:10:51.478 15:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:51.478 15:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:51.478 15:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69683' 00:10:51.478 15:33:52 -- common/autotest_common.sh@955 -- # kill 69683 00:10:51.478 [2024-04-17 15:33:52.675988] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:10:51.478 15:33:52 -- common/autotest_common.sh@960 -- # wait 69683 00:10:51.737 15:33:53 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:10:51.737 15:33:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:10:51.737 15:33:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:10:51.737 15:33:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:10:51.737 15:33:53 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:51.737 15:33:53 -- nvmf/common.sh@693 -- # digest=2 00:10:51.737 15:33:53 -- nvmf/common.sh@694 -- # python - 00:10:51.737 15:33:53 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:51.737 15:33:53 -- target/tls.sh@160 -- # mktemp 00:10:51.737 15:33:53 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9RiozTkANn 00:10:51.737 15:33:53 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:51.737 15:33:53 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9RiozTkANn 00:10:51.737 15:33:53 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:10:51.737 15:33:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:51.737 15:33:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:51.737 15:33:53 -- common/autotest_common.sh@10 -- # set +x 00:10:51.737 15:33:53 -- nvmf/common.sh@470 -- # nvmfpid=70176 00:10:51.737 15:33:53 -- nvmf/common.sh@471 -- # waitforlisten 70176 00:10:51.737 15:33:53 -- common/autotest_common.sh@817 -- # '[' -z 70176 ']' 00:10:51.737 15:33:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.737 15:33:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
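For reference, the NVMeTLSkey-1 interchange string produced by format_interchange_psk above can be reconstructed with a short Python sketch. This assumes, based only on the captured output, that the helper base64-encodes the configured key string together with its little-endian CRC32; the function name below is illustrative and is not part of the test scripts.

import base64
import zlib


def format_interchange_psk(configured_key: str, hash_id: int) -> str:
    """Wrap a configured PSK as NVMeTLSkey-1:<hash>:<base64(key + crc32)>:."""
    raw = configured_key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")  # CRC32 appended little-endian
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"


if __name__ == "__main__":
    # Same inputs as target/tls.sh@159 in the log; if the assumption above holds,
    # this prints the NVMeTLSkey-1:02:...wWXNJw==: value the test writes to its
    # mode-0600 temp file.
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))

The second argument only fills in the two-digit hash identifier in the prefix here; how SPDK interprets that field is not visible in this log.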
00:10:51.737 15:33:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:51.737 15:33:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.737 15:33:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:51.737 15:33:53 -- common/autotest_common.sh@10 -- # set +x 00:10:51.737 [2024-04-17 15:33:53.137632] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:51.737 [2024-04-17 15:33:53.137735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.996 [2024-04-17 15:33:53.270909] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.996 [2024-04-17 15:33:53.408986] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.996 [2024-04-17 15:33:53.409041] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.996 [2024-04-17 15:33:53.409070] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.996 [2024-04-17 15:33:53.409078] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.996 [2024-04-17 15:33:53.409085] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.996 [2024-04-17 15:33:53.409112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.933 15:33:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:52.933 15:33:54 -- common/autotest_common.sh@850 -- # return 0 00:10:52.933 15:33:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:52.933 15:33:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:52.933 15:33:54 -- common/autotest_common.sh@10 -- # set +x 00:10:52.933 15:33:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.933 15:33:54 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:10:52.934 15:33:54 -- target/tls.sh@49 -- # local key=/tmp/tmp.9RiozTkANn 00:10:52.934 15:33:54 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:53.192 [2024-04-17 15:33:54.440566] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.193 15:33:54 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:53.456 15:33:54 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:53.725 [2024-04-17 15:33:54.936678] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:53.725 [2024-04-17 15:33:54.937014] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.725 15:33:54 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:53.984 malloc0 00:10:53.984 15:33:55 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:53.984 15:33:55 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:10:54.243 [2024-04-17 15:33:55.599207] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:10:54.243 15:33:55 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RiozTkANn 00:10:54.243 15:33:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:54.243 15:33:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:54.243 15:33:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:54.243 15:33:55 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9RiozTkANn' 00:10:54.243 15:33:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:54.243 15:33:55 -- target/tls.sh@28 -- # bdevperf_pid=70231 00:10:54.243 15:33:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:54.243 15:33:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:54.243 15:33:55 -- target/tls.sh@31 -- # waitforlisten 70231 /var/tmp/bdevperf.sock 00:10:54.243 15:33:55 -- common/autotest_common.sh@817 -- # '[' -z 70231 ']' 00:10:54.243 15:33:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:54.243 15:33:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.243 15:33:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:54.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:54.243 15:33:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.243 15:33:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.243 [2024-04-17 15:33:55.665395] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:10:54.243 [2024-04-17 15:33:55.665678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:10:54.502 [2024-04-17 15:33:55.800542] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.760 [2024-04-17 15:33:55.953228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.328 15:33:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.328 15:33:56 -- common/autotest_common.sh@850 -- # return 0 00:10:55.328 15:33:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:10:55.328 [2024-04-17 15:33:56.745769] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:55.328 [2024-04-17 15:33:56.747238] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:10:55.587 TLSTESTn1 00:10:55.587 15:33:56 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:55.587 Running I/O for 10 seconds... 
00:11:05.562 00:11:05.562 Latency(us) 00:11:05.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.562 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:05.562 Verification LBA range: start 0x0 length 0x2000 00:11:05.562 TLSTESTn1 : 10.02 4075.00 15.92 0.00 0.00 31348.11 8460.10 21448.15 00:11:05.562 =================================================================================================================== 00:11:05.562 Total : 4075.00 15.92 0.00 0.00 31348.11 8460.10 21448.15 00:11:05.562 0 00:11:05.562 15:34:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:05.562 15:34:06 -- target/tls.sh@45 -- # killprocess 70231 00:11:05.562 15:34:06 -- common/autotest_common.sh@936 -- # '[' -z 70231 ']' 00:11:05.562 15:34:06 -- common/autotest_common.sh@940 -- # kill -0 70231 00:11:05.562 15:34:06 -- common/autotest_common.sh@941 -- # uname 00:11:05.562 15:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.562 15:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70231 00:11:05.562 15:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:05.562 killing process with pid 70231 00:11:05.562 Received shutdown signal, test time was about 10.000000 seconds 00:11:05.562 00:11:05.562 Latency(us) 00:11:05.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.562 =================================================================================================================== 00:11:05.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:05.562 15:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:05.562 15:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70231' 00:11:05.562 15:34:06 -- common/autotest_common.sh@955 -- # kill 70231 00:11:05.562 [2024-04-17 15:34:06.997018] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:05.562 15:34:06 -- common/autotest_common.sh@960 -- # wait 70231 00:11:06.129 15:34:07 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9RiozTkANn 00:11:06.129 15:34:07 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RiozTkANn 00:11:06.129 15:34:07 -- common/autotest_common.sh@638 -- # local es=0 00:11:06.129 15:34:07 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RiozTkANn 00:11:06.129 15:34:07 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:11:06.129 15:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.129 15:34:07 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:11:06.129 15:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.129 15:34:07 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RiozTkANn 00:11:06.129 15:34:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:06.130 15:34:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:06.130 15:34:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:06.130 15:34:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9RiozTkANn' 00:11:06.130 15:34:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:06.130 15:34:07 -- target/tls.sh@28 -- # bdevperf_pid=70365 00:11:06.130 
15:34:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:06.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:06.130 15:34:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:06.130 15:34:07 -- target/tls.sh@31 -- # waitforlisten 70365 /var/tmp/bdevperf.sock 00:11:06.130 15:34:07 -- common/autotest_common.sh@817 -- # '[' -z 70365 ']' 00:11:06.130 15:34:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:06.130 15:34:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:06.130 15:34:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:06.130 15:34:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:06.130 15:34:07 -- common/autotest_common.sh@10 -- # set +x 00:11:06.130 [2024-04-17 15:34:07.400028] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:06.130 [2024-04-17 15:34:07.400137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70365 ] 00:11:06.130 [2024-04-17 15:34:07.539382] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.389 [2024-04-17 15:34:07.663436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.955 15:34:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:06.955 15:34:08 -- common/autotest_common.sh@850 -- # return 0 00:11:06.955 15:34:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:11:07.213 [2024-04-17 15:34:08.534423] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:07.213 [2024-04-17 15:34:08.535144] bdev_nvme.c:6046:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:07.213 [2024-04-17 15:34:08.535395] bdev_nvme.c:6155:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9RiozTkANn 00:11:07.213 request: 00:11:07.213 { 00:11:07.213 "name": "TLSTEST", 00:11:07.214 "trtype": "tcp", 00:11:07.214 "traddr": "10.0.0.2", 00:11:07.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:07.214 "adrfam": "ipv4", 00:11:07.214 "trsvcid": "4420", 00:11:07.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.214 "psk": "/tmp/tmp.9RiozTkANn", 00:11:07.214 "method": "bdev_nvme_attach_controller", 00:11:07.214 "req_id": 1 00:11:07.214 } 00:11:07.214 Got JSON-RPC error response 00:11:07.214 response: 00:11:07.214 { 00:11:07.214 "code": -1, 00:11:07.214 "message": "Operation not permitted" 00:11:07.214 } 00:11:07.214 15:34:08 -- target/tls.sh@36 -- # killprocess 70365 00:11:07.214 15:34:08 -- common/autotest_common.sh@936 -- # '[' -z 70365 ']' 00:11:07.214 15:34:08 -- common/autotest_common.sh@940 -- # kill -0 70365 00:11:07.214 15:34:08 -- common/autotest_common.sh@941 -- # uname 00:11:07.214 15:34:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.214 15:34:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70365 00:11:07.214 15:34:08 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:11:07.214 killing process with pid 70365 00:11:07.214 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.214 00:11:07.214 Latency(us) 00:11:07.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.214 =================================================================================================================== 00:11:07.214 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:07.214 15:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:07.214 15:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70365' 00:11:07.214 15:34:08 -- common/autotest_common.sh@955 -- # kill 70365 00:11:07.214 15:34:08 -- common/autotest_common.sh@960 -- # wait 70365 00:11:07.782 15:34:08 -- target/tls.sh@37 -- # return 1 00:11:07.782 15:34:08 -- common/autotest_common.sh@641 -- # es=1 00:11:07.782 15:34:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:07.782 15:34:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:07.782 15:34:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:07.782 15:34:08 -- target/tls.sh@174 -- # killprocess 70176 00:11:07.782 15:34:08 -- common/autotest_common.sh@936 -- # '[' -z 70176 ']' 00:11:07.782 15:34:08 -- common/autotest_common.sh@940 -- # kill -0 70176 00:11:07.782 15:34:08 -- common/autotest_common.sh@941 -- # uname 00:11:07.782 15:34:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.782 15:34:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70176 00:11:07.782 killing process with pid 70176 00:11:07.782 15:34:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:07.782 15:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:07.782 15:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70176' 00:11:07.782 15:34:08 -- common/autotest_common.sh@955 -- # kill 70176 00:11:07.782 [2024-04-17 15:34:08.954631] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:07.782 15:34:08 -- common/autotest_common.sh@960 -- # wait 70176 00:11:08.041 15:34:09 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:11:08.041 15:34:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:08.041 15:34:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:08.041 15:34:09 -- common/autotest_common.sh@10 -- # set +x 00:11:08.041 15:34:09 -- nvmf/common.sh@470 -- # nvmfpid=70403 00:11:08.041 15:34:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:08.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.041 15:34:09 -- nvmf/common.sh@471 -- # waitforlisten 70403 00:11:08.041 15:34:09 -- common/autotest_common.sh@817 -- # '[' -z 70403 ']' 00:11:08.041 15:34:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.041 15:34:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:08.041 15:34:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.041 15:34:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:08.041 15:34:09 -- common/autotest_common.sh@10 -- # set +x 00:11:08.041 [2024-04-17 15:34:09.358981] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:11:08.041 [2024-04-17 15:34:09.359273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.299 [2024-04-17 15:34:09.490154] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.299 [2024-04-17 15:34:09.610677] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.299 [2024-04-17 15:34:09.611060] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.299 [2024-04-17 15:34:09.611244] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.299 [2024-04-17 15:34:09.611394] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.299 [2024-04-17 15:34:09.611428] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.299 [2024-04-17 15:34:09.611546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.865 15:34:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.865 15:34:10 -- common/autotest_common.sh@850 -- # return 0 00:11:08.865 15:34:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:08.865 15:34:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:08.865 15:34:10 -- common/autotest_common.sh@10 -- # set +x 00:11:09.148 15:34:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.148 15:34:10 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:11:09.148 15:34:10 -- common/autotest_common.sh@638 -- # local es=0 00:11:09.148 15:34:10 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:11:09.148 15:34:10 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:11:09.148 15:34:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:09.148 15:34:10 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:11:09.148 15:34:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:09.148 15:34:10 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:11:09.148 15:34:10 -- target/tls.sh@49 -- # local key=/tmp/tmp.9RiozTkANn 00:11:09.148 15:34:10 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:09.429 [2024-04-17 15:34:10.580424] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.429 15:34:10 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:09.429 15:34:10 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:09.687 [2024-04-17 15:34:11.064506] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:09.687 [2024-04-17 15:34:11.064827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.687 15:34:11 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:09.944 malloc0 00:11:09.944 15:34:11 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 
1 00:11:10.202 15:34:11 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:11:10.460 [2024-04-17 15:34:11.775220] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:10.460 [2024-04-17 15:34:11.775274] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:10.460 [2024-04-17 15:34:11.775315] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:10.460 request: 00:11:10.460 { 00:11:10.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.460 "host": "nqn.2016-06.io.spdk:host1", 00:11:10.460 "psk": "/tmp/tmp.9RiozTkANn", 00:11:10.460 "method": "nvmf_subsystem_add_host", 00:11:10.460 "req_id": 1 00:11:10.460 } 00:11:10.460 Got JSON-RPC error response 00:11:10.460 response: 00:11:10.460 { 00:11:10.460 "code": -32603, 00:11:10.460 "message": "Internal error" 00:11:10.460 } 00:11:10.460 15:34:11 -- common/autotest_common.sh@641 -- # es=1 00:11:10.460 15:34:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:10.460 15:34:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:10.460 15:34:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:10.460 15:34:11 -- target/tls.sh@180 -- # killprocess 70403 00:11:10.460 15:34:11 -- common/autotest_common.sh@936 -- # '[' -z 70403 ']' 00:11:10.460 15:34:11 -- common/autotest_common.sh@940 -- # kill -0 70403 00:11:10.460 15:34:11 -- common/autotest_common.sh@941 -- # uname 00:11:10.460 15:34:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.460 15:34:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70403 00:11:10.460 killing process with pid 70403 00:11:10.460 15:34:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:10.461 15:34:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:10.461 15:34:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70403' 00:11:10.461 15:34:11 -- common/autotest_common.sh@955 -- # kill 70403 00:11:10.461 15:34:11 -- common/autotest_common.sh@960 -- # wait 70403 00:11:11.025 15:34:12 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9RiozTkANn 00:11:11.025 15:34:12 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:11.025 15:34:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:11.025 15:34:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:11.025 15:34:12 -- common/autotest_common.sh@10 -- # set +x 00:11:11.025 15:34:12 -- nvmf/common.sh@470 -- # nvmfpid=70466 00:11:11.025 15:34:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:11.025 15:34:12 -- nvmf/common.sh@471 -- # waitforlisten 70466 00:11:11.025 15:34:12 -- common/autotest_common.sh@817 -- # '[' -z 70466 ']' 00:11:11.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.025 15:34:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.025 15:34:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:11.025 15:34:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
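The two failures above ("Incorrect permissions for PSK file" on the initiator side and "Could not retrieve PSK from file" on the target side, both while the key file sits at mode 0666, followed by the chmod back to 0600) come down to a permission gate on the PSK file. A minimal sketch of such a check, assuming the requirement is simply that group and other bits must be clear; the exact mask SPDK enforces is not shown in this log.

import os
import stat


def psk_file_permissions_ok(path: str) -> bool:
    # Reject keys readable or writable by group/other (e.g. 0666); 0600 passes.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0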
00:11:11.025 15:34:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:11.025 15:34:12 -- common/autotest_common.sh@10 -- # set +x 00:11:11.025 [2024-04-17 15:34:12.237297] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:11.025 [2024-04-17 15:34:12.237420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.025 [2024-04-17 15:34:12.371037] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.282 [2024-04-17 15:34:12.500316] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.282 [2024-04-17 15:34:12.500393] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.282 [2024-04-17 15:34:12.500420] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.282 [2024-04-17 15:34:12.500428] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.282 [2024-04-17 15:34:12.500435] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.282 [2024-04-17 15:34:12.500463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.849 15:34:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.849 15:34:13 -- common/autotest_common.sh@850 -- # return 0 00:11:11.849 15:34:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:11.849 15:34:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:11.849 15:34:13 -- common/autotest_common.sh@10 -- # set +x 00:11:11.849 15:34:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.849 15:34:13 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:11:11.849 15:34:13 -- target/tls.sh@49 -- # local key=/tmp/tmp.9RiozTkANn 00:11:11.849 15:34:13 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:12.107 [2024-04-17 15:34:13.500061] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.107 15:34:13 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:12.365 15:34:13 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:12.623 [2024-04-17 15:34:13.952141] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:12.623 [2024-04-17 15:34:13.952703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.623 15:34:13 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:12.881 malloc0 00:11:12.881 15:34:14 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:13.139 15:34:14 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:11:13.399 [2024-04-17 15:34:14.650230] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
00:11:13.399 15:34:14 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:13.399 15:34:14 -- target/tls.sh@188 -- # bdevperf_pid=70515 00:11:13.399 15:34:14 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:13.399 15:34:14 -- target/tls.sh@191 -- # waitforlisten 70515 /var/tmp/bdevperf.sock 00:11:13.399 15:34:14 -- common/autotest_common.sh@817 -- # '[' -z 70515 ']' 00:11:13.399 15:34:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.399 15:34:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:13.400 15:34:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.400 15:34:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:13.400 15:34:14 -- common/autotest_common.sh@10 -- # set +x 00:11:13.400 [2024-04-17 15:34:14.708051] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:13.400 [2024-04-17 15:34:14.708138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70515 ] 00:11:13.658 [2024-04-17 15:34:14.843808] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.658 [2024-04-17 15:34:14.980241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.225 15:34:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:14.225 15:34:15 -- common/autotest_common.sh@850 -- # return 0 00:11:14.225 15:34:15 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:11:14.485 [2024-04-17 15:34:15.839138] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:14.485 [2024-04-17 15:34:15.839729] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:14.485 TLSTESTn1 00:11:14.743 15:34:15 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:15.002 15:34:16 -- target/tls.sh@196 -- # tgtconf='{ 00:11:15.002 "subsystems": [ 00:11:15.002 { 00:11:15.002 "subsystem": "keyring", 00:11:15.002 "config": [] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "iobuf", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "iobuf_set_options", 00:11:15.002 "params": { 00:11:15.002 "small_pool_count": 8192, 00:11:15.002 "large_pool_count": 1024, 00:11:15.002 "small_bufsize": 8192, 00:11:15.002 "large_bufsize": 135168 00:11:15.002 } 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "sock", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "sock_impl_set_options", 00:11:15.002 "params": { 00:11:15.002 "impl_name": "uring", 00:11:15.002 "recv_buf_size": 2097152, 00:11:15.002 "send_buf_size": 2097152, 00:11:15.002 "enable_recv_pipe": true, 00:11:15.002 "enable_quickack": false, 00:11:15.002 "enable_placement_id": 0, 00:11:15.002 
"enable_zerocopy_send_server": false, 00:11:15.002 "enable_zerocopy_send_client": false, 00:11:15.002 "zerocopy_threshold": 0, 00:11:15.002 "tls_version": 0, 00:11:15.002 "enable_ktls": false 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "sock_impl_set_options", 00:11:15.002 "params": { 00:11:15.002 "impl_name": "posix", 00:11:15.002 "recv_buf_size": 2097152, 00:11:15.002 "send_buf_size": 2097152, 00:11:15.002 "enable_recv_pipe": true, 00:11:15.002 "enable_quickack": false, 00:11:15.002 "enable_placement_id": 0, 00:11:15.002 "enable_zerocopy_send_server": true, 00:11:15.002 "enable_zerocopy_send_client": false, 00:11:15.002 "zerocopy_threshold": 0, 00:11:15.002 "tls_version": 0, 00:11:15.002 "enable_ktls": false 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "sock_impl_set_options", 00:11:15.002 "params": { 00:11:15.002 "impl_name": "ssl", 00:11:15.002 "recv_buf_size": 4096, 00:11:15.002 "send_buf_size": 4096, 00:11:15.002 "enable_recv_pipe": true, 00:11:15.002 "enable_quickack": false, 00:11:15.002 "enable_placement_id": 0, 00:11:15.002 "enable_zerocopy_send_server": true, 00:11:15.002 "enable_zerocopy_send_client": false, 00:11:15.002 "zerocopy_threshold": 0, 00:11:15.002 "tls_version": 0, 00:11:15.002 "enable_ktls": false 00:11:15.002 } 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "vmd", 00:11:15.002 "config": [] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "accel", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "accel_set_options", 00:11:15.002 "params": { 00:11:15.002 "small_cache_size": 128, 00:11:15.002 "large_cache_size": 16, 00:11:15.002 "task_count": 2048, 00:11:15.002 "sequence_count": 2048, 00:11:15.002 "buf_count": 2048 00:11:15.002 } 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "bdev", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "bdev_set_options", 00:11:15.002 "params": { 00:11:15.002 "bdev_io_pool_size": 65535, 00:11:15.002 "bdev_io_cache_size": 256, 00:11:15.002 "bdev_auto_examine": true, 00:11:15.002 "iobuf_small_cache_size": 128, 00:11:15.002 "iobuf_large_cache_size": 16 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_raid_set_options", 00:11:15.002 "params": { 00:11:15.002 "process_window_size_kb": 1024 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_iscsi_set_options", 00:11:15.002 "params": { 00:11:15.002 "timeout_sec": 30 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_nvme_set_options", 00:11:15.002 "params": { 00:11:15.002 "action_on_timeout": "none", 00:11:15.002 "timeout_us": 0, 00:11:15.002 "timeout_admin_us": 0, 00:11:15.002 "keep_alive_timeout_ms": 10000, 00:11:15.002 "arbitration_burst": 0, 00:11:15.002 "low_priority_weight": 0, 00:11:15.002 "medium_priority_weight": 0, 00:11:15.002 "high_priority_weight": 0, 00:11:15.002 "nvme_adminq_poll_period_us": 10000, 00:11:15.002 "nvme_ioq_poll_period_us": 0, 00:11:15.002 "io_queue_requests": 0, 00:11:15.002 "delay_cmd_submit": true, 00:11:15.002 "transport_retry_count": 4, 00:11:15.002 "bdev_retry_count": 3, 00:11:15.002 "transport_ack_timeout": 0, 00:11:15.002 "ctrlr_loss_timeout_sec": 0, 00:11:15.002 "reconnect_delay_sec": 0, 00:11:15.002 "fast_io_fail_timeout_sec": 0, 00:11:15.002 "disable_auto_failback": false, 00:11:15.002 "generate_uuids": false, 00:11:15.002 "transport_tos": 0, 00:11:15.002 "nvme_error_stat": false, 00:11:15.002 "rdma_srq_size": 0, 
00:11:15.002 "io_path_stat": false, 00:11:15.002 "allow_accel_sequence": false, 00:11:15.002 "rdma_max_cq_size": 0, 00:11:15.002 "rdma_cm_event_timeout_ms": 0, 00:11:15.002 "dhchap_digests": [ 00:11:15.002 "sha256", 00:11:15.002 "sha384", 00:11:15.002 "sha512" 00:11:15.002 ], 00:11:15.002 "dhchap_dhgroups": [ 00:11:15.002 "null", 00:11:15.002 "ffdhe2048", 00:11:15.002 "ffdhe3072", 00:11:15.002 "ffdhe4096", 00:11:15.002 "ffdhe6144", 00:11:15.002 "ffdhe8192" 00:11:15.002 ] 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_nvme_set_hotplug", 00:11:15.002 "params": { 00:11:15.002 "period_us": 100000, 00:11:15.002 "enable": false 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_malloc_create", 00:11:15.002 "params": { 00:11:15.002 "name": "malloc0", 00:11:15.002 "num_blocks": 8192, 00:11:15.002 "block_size": 4096, 00:11:15.002 "physical_block_size": 4096, 00:11:15.002 "uuid": "8dfd452f-bb84-48fd-93af-780ea6985f98", 00:11:15.002 "optimal_io_boundary": 0 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "bdev_wait_for_examine" 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "nbd", 00:11:15.002 "config": [] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "scheduler", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "framework_set_scheduler", 00:11:15.002 "params": { 00:11:15.002 "name": "static" 00:11:15.002 } 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "subsystem": "nvmf", 00:11:15.002 "config": [ 00:11:15.002 { 00:11:15.002 "method": "nvmf_set_config", 00:11:15.002 "params": { 00:11:15.002 "discovery_filter": "match_any", 00:11:15.002 "admin_cmd_passthru": { 00:11:15.002 "identify_ctrlr": false 00:11:15.002 } 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_set_max_subsystems", 00:11:15.002 "params": { 00:11:15.002 "max_subsystems": 1024 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_set_crdt", 00:11:15.002 "params": { 00:11:15.002 "crdt1": 0, 00:11:15.002 "crdt2": 0, 00:11:15.002 "crdt3": 0 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_create_transport", 00:11:15.002 "params": { 00:11:15.002 "trtype": "TCP", 00:11:15.002 "max_queue_depth": 128, 00:11:15.002 "max_io_qpairs_per_ctrlr": 127, 00:11:15.002 "in_capsule_data_size": 4096, 00:11:15.002 "max_io_size": 131072, 00:11:15.002 "io_unit_size": 131072, 00:11:15.002 "max_aq_depth": 128, 00:11:15.002 "num_shared_buffers": 511, 00:11:15.002 "buf_cache_size": 4294967295, 00:11:15.002 "dif_insert_or_strip": false, 00:11:15.002 "zcopy": false, 00:11:15.002 "c2h_success": false, 00:11:15.002 "sock_priority": 0, 00:11:15.002 "abort_timeout_sec": 1, 00:11:15.002 "ack_timeout": 0 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_create_subsystem", 00:11:15.002 "params": { 00:11:15.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.002 "allow_any_host": false, 00:11:15.002 "serial_number": "SPDK00000000000001", 00:11:15.002 "model_number": "SPDK bdev Controller", 00:11:15.002 "max_namespaces": 10, 00:11:15.002 "min_cntlid": 1, 00:11:15.002 "max_cntlid": 65519, 00:11:15.002 "ana_reporting": false 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_subsystem_add_host", 00:11:15.002 "params": { 00:11:15.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.002 "host": "nqn.2016-06.io.spdk:host1", 00:11:15.002 "psk": "/tmp/tmp.9RiozTkANn" 00:11:15.002 } 00:11:15.002 }, 
00:11:15.002 { 00:11:15.002 "method": "nvmf_subsystem_add_ns", 00:11:15.002 "params": { 00:11:15.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.002 "namespace": { 00:11:15.002 "nsid": 1, 00:11:15.002 "bdev_name": "malloc0", 00:11:15.002 "nguid": "8DFD452FBB8448FD93AF780EA6985F98", 00:11:15.002 "uuid": "8dfd452f-bb84-48fd-93af-780ea6985f98", 00:11:15.002 "no_auto_visible": false 00:11:15.002 } 00:11:15.002 } 00:11:15.002 }, 00:11:15.002 { 00:11:15.002 "method": "nvmf_subsystem_add_listener", 00:11:15.002 "params": { 00:11:15.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.002 "listen_address": { 00:11:15.002 "trtype": "TCP", 00:11:15.002 "adrfam": "IPv4", 00:11:15.002 "traddr": "10.0.0.2", 00:11:15.002 "trsvcid": "4420" 00:11:15.002 }, 00:11:15.002 "secure_channel": true 00:11:15.002 } 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 } 00:11:15.002 ] 00:11:15.002 }' 00:11:15.002 15:34:16 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:15.260 15:34:16 -- target/tls.sh@197 -- # bdevperfconf='{ 00:11:15.260 "subsystems": [ 00:11:15.260 { 00:11:15.260 "subsystem": "keyring", 00:11:15.260 "config": [] 00:11:15.260 }, 00:11:15.260 { 00:11:15.260 "subsystem": "iobuf", 00:11:15.260 "config": [ 00:11:15.260 { 00:11:15.260 "method": "iobuf_set_options", 00:11:15.260 "params": { 00:11:15.260 "small_pool_count": 8192, 00:11:15.260 "large_pool_count": 1024, 00:11:15.260 "small_bufsize": 8192, 00:11:15.260 "large_bufsize": 135168 00:11:15.260 } 00:11:15.260 } 00:11:15.260 ] 00:11:15.260 }, 00:11:15.260 { 00:11:15.260 "subsystem": "sock", 00:11:15.260 "config": [ 00:11:15.260 { 00:11:15.260 "method": "sock_impl_set_options", 00:11:15.260 "params": { 00:11:15.260 "impl_name": "uring", 00:11:15.260 "recv_buf_size": 2097152, 00:11:15.260 "send_buf_size": 2097152, 00:11:15.260 "enable_recv_pipe": true, 00:11:15.260 "enable_quickack": false, 00:11:15.260 "enable_placement_id": 0, 00:11:15.260 "enable_zerocopy_send_server": false, 00:11:15.260 "enable_zerocopy_send_client": false, 00:11:15.260 "zerocopy_threshold": 0, 00:11:15.260 "tls_version": 0, 00:11:15.260 "enable_ktls": false 00:11:15.260 } 00:11:15.260 }, 00:11:15.260 { 00:11:15.260 "method": "sock_impl_set_options", 00:11:15.260 "params": { 00:11:15.260 "impl_name": "posix", 00:11:15.260 "recv_buf_size": 2097152, 00:11:15.260 "send_buf_size": 2097152, 00:11:15.260 "enable_recv_pipe": true, 00:11:15.260 "enable_quickack": false, 00:11:15.260 "enable_placement_id": 0, 00:11:15.260 "enable_zerocopy_send_server": true, 00:11:15.260 "enable_zerocopy_send_client": false, 00:11:15.260 "zerocopy_threshold": 0, 00:11:15.260 "tls_version": 0, 00:11:15.260 "enable_ktls": false 00:11:15.260 } 00:11:15.260 }, 00:11:15.260 { 00:11:15.260 "method": "sock_impl_set_options", 00:11:15.260 "params": { 00:11:15.260 "impl_name": "ssl", 00:11:15.260 "recv_buf_size": 4096, 00:11:15.260 "send_buf_size": 4096, 00:11:15.260 "enable_recv_pipe": true, 00:11:15.260 "enable_quickack": false, 00:11:15.260 "enable_placement_id": 0, 00:11:15.260 "enable_zerocopy_send_server": true, 00:11:15.260 "enable_zerocopy_send_client": false, 00:11:15.260 "zerocopy_threshold": 0, 00:11:15.260 "tls_version": 0, 00:11:15.260 "enable_ktls": false 00:11:15.260 } 00:11:15.260 } 00:11:15.260 ] 00:11:15.260 }, 00:11:15.260 { 00:11:15.261 "subsystem": "vmd", 00:11:15.261 "config": [] 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "subsystem": "accel", 00:11:15.261 "config": [ 00:11:15.261 { 00:11:15.261 "method": "accel_set_options", 
00:11:15.261 "params": { 00:11:15.261 "small_cache_size": 128, 00:11:15.261 "large_cache_size": 16, 00:11:15.261 "task_count": 2048, 00:11:15.261 "sequence_count": 2048, 00:11:15.261 "buf_count": 2048 00:11:15.261 } 00:11:15.261 } 00:11:15.261 ] 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "subsystem": "bdev", 00:11:15.261 "config": [ 00:11:15.261 { 00:11:15.261 "method": "bdev_set_options", 00:11:15.261 "params": { 00:11:15.261 "bdev_io_pool_size": 65535, 00:11:15.261 "bdev_io_cache_size": 256, 00:11:15.261 "bdev_auto_examine": true, 00:11:15.261 "iobuf_small_cache_size": 128, 00:11:15.261 "iobuf_large_cache_size": 16 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_raid_set_options", 00:11:15.261 "params": { 00:11:15.261 "process_window_size_kb": 1024 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_iscsi_set_options", 00:11:15.261 "params": { 00:11:15.261 "timeout_sec": 30 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_nvme_set_options", 00:11:15.261 "params": { 00:11:15.261 "action_on_timeout": "none", 00:11:15.261 "timeout_us": 0, 00:11:15.261 "timeout_admin_us": 0, 00:11:15.261 "keep_alive_timeout_ms": 10000, 00:11:15.261 "arbitration_burst": 0, 00:11:15.261 "low_priority_weight": 0, 00:11:15.261 "medium_priority_weight": 0, 00:11:15.261 "high_priority_weight": 0, 00:11:15.261 "nvme_adminq_poll_period_us": 10000, 00:11:15.261 "nvme_ioq_poll_period_us": 0, 00:11:15.261 "io_queue_requests": 512, 00:11:15.261 "delay_cmd_submit": true, 00:11:15.261 "transport_retry_count": 4, 00:11:15.261 "bdev_retry_count": 3, 00:11:15.261 "transport_ack_timeout": 0, 00:11:15.261 "ctrlr_loss_timeout_sec": 0, 00:11:15.261 "reconnect_delay_sec": 0, 00:11:15.261 "fast_io_fail_timeout_sec": 0, 00:11:15.261 "disable_auto_failback": false, 00:11:15.261 "generate_uuids": false, 00:11:15.261 "transport_tos": 0, 00:11:15.261 "nvme_error_stat": false, 00:11:15.261 "rdma_srq_size": 0, 00:11:15.261 "io_path_stat": false, 00:11:15.261 "allow_accel_sequence": false, 00:11:15.261 "rdma_max_cq_size": 0, 00:11:15.261 "rdma_cm_event_timeout_ms": 0, 00:11:15.261 "dhchap_digests": [ 00:11:15.261 "sha256", 00:11:15.261 "sha384", 00:11:15.261 "sha512" 00:11:15.261 ], 00:11:15.261 "dhchap_dhgroups": [ 00:11:15.261 "null", 00:11:15.261 "ffdhe2048", 00:11:15.261 "ffdhe3072", 00:11:15.261 "ffdhe4096", 00:11:15.261 "ffdhe6144", 00:11:15.261 "ffdhe8192" 00:11:15.261 ] 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_nvme_attach_controller", 00:11:15.261 "params": { 00:11:15.261 "name": "TLSTEST", 00:11:15.261 "trtype": "TCP", 00:11:15.261 "adrfam": "IPv4", 00:11:15.261 "traddr": "10.0.0.2", 00:11:15.261 "trsvcid": "4420", 00:11:15.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.261 "prchk_reftag": false, 00:11:15.261 "prchk_guard": false, 00:11:15.261 "ctrlr_loss_timeout_sec": 0, 00:11:15.261 "reconnect_delay_sec": 0, 00:11:15.261 "fast_io_fail_timeout_sec": 0, 00:11:15.261 "psk": "/tmp/tmp.9RiozTkANn", 00:11:15.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.261 "hdgst": false, 00:11:15.261 "ddgst": false 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_nvme_set_hotplug", 00:11:15.261 "params": { 00:11:15.261 "period_us": 100000, 00:11:15.261 "enable": false 00:11:15.261 } 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "method": "bdev_wait_for_examine" 00:11:15.261 } 00:11:15.261 ] 00:11:15.261 }, 00:11:15.261 { 00:11:15.261 "subsystem": "nbd", 00:11:15.261 "config": [] 00:11:15.261 } 
00:11:15.261 ] 00:11:15.261 }' 00:11:15.261 15:34:16 -- target/tls.sh@199 -- # killprocess 70515 00:11:15.261 15:34:16 -- common/autotest_common.sh@936 -- # '[' -z 70515 ']' 00:11:15.261 15:34:16 -- common/autotest_common.sh@940 -- # kill -0 70515 00:11:15.261 15:34:16 -- common/autotest_common.sh@941 -- # uname 00:11:15.261 15:34:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.261 15:34:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70515 00:11:15.261 killing process with pid 70515 00:11:15.261 Received shutdown signal, test time was about 10.000000 seconds 00:11:15.261 00:11:15.261 Latency(us) 00:11:15.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.261 =================================================================================================================== 00:11:15.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:15.261 15:34:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:15.261 15:34:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:15.261 15:34:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70515' 00:11:15.261 15:34:16 -- common/autotest_common.sh@955 -- # kill 70515 00:11:15.261 [2024-04-17 15:34:16.578301] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:15.261 15:34:16 -- common/autotest_common.sh@960 -- # wait 70515 00:11:15.542 15:34:16 -- target/tls.sh@200 -- # killprocess 70466 00:11:15.542 15:34:16 -- common/autotest_common.sh@936 -- # '[' -z 70466 ']' 00:11:15.542 15:34:16 -- common/autotest_common.sh@940 -- # kill -0 70466 00:11:15.542 15:34:16 -- common/autotest_common.sh@941 -- # uname 00:11:15.542 15:34:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.542 15:34:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70466 00:11:15.542 killing process with pid 70466 00:11:15.542 15:34:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:15.542 15:34:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:15.542 15:34:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70466' 00:11:15.542 15:34:16 -- common/autotest_common.sh@955 -- # kill 70466 00:11:15.542 [2024-04-17 15:34:16.960604] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:15.542 15:34:16 -- common/autotest_common.sh@960 -- # wait 70466 00:11:16.113 15:34:17 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:16.113 15:34:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:16.113 15:34:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:16.113 15:34:17 -- target/tls.sh@203 -- # echo '{ 00:11:16.113 "subsystems": [ 00:11:16.113 { 00:11:16.113 "subsystem": "keyring", 00:11:16.113 "config": [] 00:11:16.113 }, 00:11:16.113 { 00:11:16.113 "subsystem": "iobuf", 00:11:16.113 "config": [ 00:11:16.113 { 00:11:16.113 "method": "iobuf_set_options", 00:11:16.113 "params": { 00:11:16.113 "small_pool_count": 8192, 00:11:16.113 "large_pool_count": 1024, 00:11:16.113 "small_bufsize": 8192, 00:11:16.113 "large_bufsize": 135168 00:11:16.113 } 00:11:16.113 } 00:11:16.113 ] 00:11:16.113 }, 00:11:16.113 { 00:11:16.113 "subsystem": "sock", 00:11:16.113 "config": [ 00:11:16.113 { 00:11:16.113 "method": "sock_impl_set_options", 00:11:16.113 "params": { 
00:11:16.113 "impl_name": "uring", 00:11:16.113 "recv_buf_size": 2097152, 00:11:16.113 "send_buf_size": 2097152, 00:11:16.113 "enable_recv_pipe": true, 00:11:16.113 "enable_quickack": false, 00:11:16.113 "enable_placement_id": 0, 00:11:16.113 "enable_zerocopy_send_server": false, 00:11:16.113 "enable_zerocopy_send_client": false, 00:11:16.113 "zerocopy_threshold": 0, 00:11:16.113 "tls_version": 0, 00:11:16.113 "enable_ktls": false 00:11:16.113 } 00:11:16.113 }, 00:11:16.113 { 00:11:16.113 "method": "sock_impl_set_options", 00:11:16.113 "params": { 00:11:16.113 "impl_name": "posix", 00:11:16.113 "recv_buf_size": 2097152, 00:11:16.113 "send_buf_size": 2097152, 00:11:16.113 "enable_recv_pipe": true, 00:11:16.113 "enable_quickack": false, 00:11:16.113 "enable_placement_id": 0, 00:11:16.113 "enable_zerocopy_send_server": true, 00:11:16.113 "enable_zerocopy_send_client": false, 00:11:16.114 "zerocopy_threshold": 0, 00:11:16.114 "tls_version": 0, 00:11:16.114 "enable_ktls": false 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "sock_impl_set_options", 00:11:16.114 "params": { 00:11:16.114 "impl_name": "ssl", 00:11:16.114 "recv_buf_size": 4096, 00:11:16.114 "send_buf_size": 4096, 00:11:16.114 "enable_recv_pipe": true, 00:11:16.114 "enable_quickack": false, 00:11:16.114 "enable_placement_id": 0, 00:11:16.114 "enable_zerocopy_send_server": true, 00:11:16.114 "enable_zerocopy_send_client": false, 00:11:16.114 "zerocopy_threshold": 0, 00:11:16.114 "tls_version": 0, 00:11:16.114 "enable_ktls": false 00:11:16.114 } 00:11:16.114 } 00:11:16.114 ] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "vmd", 00:11:16.114 "config": [] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "accel", 00:11:16.114 "config": [ 00:11:16.114 { 00:11:16.114 "method": "accel_set_options", 00:11:16.114 "params": { 00:11:16.114 "small_cache_size": 128, 00:11:16.114 "large_cache_size": 16, 00:11:16.114 "task_count": 2048, 00:11:16.114 "sequence_count": 2048, 00:11:16.114 "buf_count": 2048 00:11:16.114 } 00:11:16.114 } 00:11:16.114 ] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "bdev", 00:11:16.114 "config": [ 00:11:16.114 { 00:11:16.114 "method": "bdev_set_options", 00:11:16.114 "params": { 00:11:16.114 "bdev_io_pool_size": 65535, 00:11:16.114 "bdev_io_cache_size": 256, 00:11:16.114 "bdev_auto_examine": true, 00:11:16.114 "iobuf_small_cache_size": 128, 00:11:16.114 "iobuf_large_cache_size": 16 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_raid_set_options", 00:11:16.114 "params": { 00:11:16.114 "process_window_size_kb": 1024 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_iscsi_set_options", 00:11:16.114 "params": { 00:11:16.114 "timeout_sec": 30 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_nvme_set_options", 00:11:16.114 "params": { 00:11:16.114 "action_on_timeout": "none", 00:11:16.114 "timeout_us": 0, 00:11:16.114 "timeout_admin_us": 0, 00:11:16.114 "keep_alive_timeout_ms": 10000, 00:11:16.114 "arbitration_burst": 0, 00:11:16.114 "low_priority_weight": 0, 00:11:16.114 "medium_priority_weight": 0, 00:11:16.114 "high_priority_weight": 0, 00:11:16.114 "nvme_adminq_poll_period_us": 10000, 00:11:16.114 "nvme_ioq_poll_period_us": 0, 00:11:16.114 "io_queue_requests": 0, 00:11:16.114 "delay_cmd_submit": true, 00:11:16.114 "transport_retry_count": 4, 00:11:16.114 "bdev_retry_count": 3, 00:11:16.114 "transport_ack_timeout": 0, 00:11:16.114 "ctrlr_loss_timeout_sec": 0, 00:11:16.114 "reconnect_delay_sec": 
0, 00:11:16.114 "fast_io_fail_timeout_sec": 0, 00:11:16.114 "disable_auto_failback": false, 00:11:16.114 "generate_uuids": false, 00:11:16.114 "transport_tos": 0, 00:11:16.114 "nvme_error_stat": false, 00:11:16.114 "rdma_srq_size": 0, 00:11:16.114 "io_path_stat": false, 00:11:16.114 "allow_accel_sequence": false, 00:11:16.114 "rdma_max_cq_size": 0, 00:11:16.114 "rdma_cm_event_timeout_ms": 0, 00:11:16.114 "dhchap_digests": [ 00:11:16.114 "sha256", 00:11:16.114 "sha384", 00:11:16.114 "sha512" 00:11:16.114 ], 00:11:16.114 "dhchap_dhgroups": [ 00:11:16.114 "null", 00:11:16.114 "ffdhe2048", 00:11:16.114 "ffdhe3072", 00:11:16.114 "ffdhe4096", 00:11:16.114 "ffdhe6144", 00:11:16.114 "ffdhe8192" 00:11:16.114 ] 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_nvme_set_hotplug", 00:11:16.114 "params": { 00:11:16.114 "period_us": 100000, 00:11:16.114 "enable": false 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_malloc_create", 00:11:16.114 "params": { 00:11:16.114 "name": "malloc0", 00:11:16.114 "num_blocks": 8192, 00:11:16.114 "block_size": 4096, 00:11:16.114 "physical_block_size": 4096, 00:11:16.114 "uuid": "8dfd452f-bb84-48fd-93af-780ea6985f98", 00:11:16.114 "optimal_io_boundary": 0 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "bdev_wait_for_examine" 00:11:16.114 } 00:11:16.114 ] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "nbd", 00:11:16.114 "config": [] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "scheduler", 00:11:16.114 "config": [ 00:11:16.114 { 00:11:16.114 "method": "framework_set_scheduler", 00:11:16.114 "params": { 00:11:16.114 "name": "static" 00:11:16.114 } 00:11:16.114 } 00:11:16.114 ] 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "subsystem": "nvmf", 00:11:16.114 "config": [ 00:11:16.114 { 00:11:16.114 "method": "nvmf_set_config", 00:11:16.114 "params": { 00:11:16.114 "discovery_filter": "match_any", 00:11:16.114 "admin_cmd_passthru": { 00:11:16.114 "identify_ctrlr": false 00:11:16.114 } 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_set_max_subsystems", 00:11:16.114 "params": { 00:11:16.114 "max_subsystems": 1024 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_set_crdt", 00:11:16.114 "params": { 00:11:16.114 "crdt1": 0, 00:11:16.114 "crdt2": 0, 00:11:16.114 "crdt3": 0 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_create_transport", 00:11:16.114 "params": { 00:11:16.114 "trtype": "TCP", 00:11:16.114 "max_queue_depth": 128, 00:11:16.114 "max_io_qpairs_per_ctrlr": 127, 00:11:16.114 "in_capsule_data_size": 4096, 00:11:16.114 "max_io_size": 131072, 00:11:16.114 "io_unit_size": 131072, 00:11:16.114 "max_aq_depth": 128, 00:11:16.114 "num_shared_buffers": 511, 00:11:16.114 "buf_cache_size": 4294967295, 00:11:16.114 "dif_insert_or_strip": false, 00:11:16.114 "zcopy": false, 00:11:16.114 "c2h_success": false, 00:11:16.114 "sock_priority": 0, 00:11:16.114 "abort_timeout_sec": 1, 00:11:16.114 "ack_timeout": 0 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_create_subsystem", 00:11:16.114 "params": { 00:11:16.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.114 "allow_any_host": false, 00:11:16.114 "serial_number": "SPDK00000000000001", 00:11:16.114 "model_number": "SPDK bdev Controller", 00:11:16.114 "max_namespaces": 10, 00:11:16.114 "min_cntlid": 1, 00:11:16.114 "max_cntlid": 65519, 00:11:16.114 "ana_reporting": false 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 
"method": "nvmf_subsystem_add_host", 00:11:16.114 "params": { 00:11:16.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.114 "host": "nqn.2016-06.io.spdk:host1", 00:11:16.114 "psk": "/tmp/tmp.9RiozTkANn" 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_subsystem_add_ns", 00:11:16.114 "params": { 00:11:16.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.114 "namespace": { 00:11:16.114 "nsid": 1, 00:11:16.114 "bdev_name": "malloc0", 00:11:16.114 "nguid": "8DFD452FBB8448FD93AF780EA6985F98", 00:11:16.114 "uuid": "8dfd452f-bb84-48fd-93af-780ea6985f98", 00:11:16.114 "no_auto_visible": false 00:11:16.114 } 00:11:16.114 } 00:11:16.114 }, 00:11:16.114 { 00:11:16.114 "method": "nvmf_subsystem_add_listener", 00:11:16.114 "params": { 00:11:16.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.115 "listen_address": { 00:11:16.115 "trtype": "TCP", 00:11:16.115 "adrfam": "IPv4", 00:11:16.115 "traddr": "10.0.0.2", 00:11:16.115 "trsvcid": "4420" 00:11:16.115 }, 00:11:16.115 "secure_channel": true 00:11:16.115 } 00:11:16.115 } 00:11:16.115 ] 00:11:16.115 } 00:11:16.115 ] 00:11:16.115 }' 00:11:16.115 15:34:17 -- common/autotest_common.sh@10 -- # set +x 00:11:16.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.115 15:34:17 -- nvmf/common.sh@470 -- # nvmfpid=70569 00:11:16.115 15:34:17 -- nvmf/common.sh@471 -- # waitforlisten 70569 00:11:16.115 15:34:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:16.115 15:34:17 -- common/autotest_common.sh@817 -- # '[' -z 70569 ']' 00:11:16.115 15:34:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.115 15:34:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.115 15:34:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.115 15:34:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.115 15:34:17 -- common/autotest_common.sh@10 -- # set +x 00:11:16.115 [2024-04-17 15:34:17.367111] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:16.115 [2024-04-17 15:34:17.367461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.115 [2024-04-17 15:34:17.504596] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.373 [2024-04-17 15:34:17.641190] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.373 [2024-04-17 15:34:17.641501] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.373 [2024-04-17 15:34:17.641669] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.373 [2024-04-17 15:34:17.641844] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.373 [2024-04-17 15:34:17.641881] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.373 [2024-04-17 15:34:17.642003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.630 [2024-04-17 15:34:17.898400] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.630 [2024-04-17 15:34:17.914337] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:16.630 [2024-04-17 15:34:17.930334] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:16.630 [2024-04-17 15:34:17.930577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.196 15:34:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:17.196 15:34:18 -- common/autotest_common.sh@850 -- # return 0 00:11:17.196 15:34:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:17.196 15:34:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:17.196 15:34:18 -- common/autotest_common.sh@10 -- # set +x 00:11:17.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:17.196 15:34:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.196 15:34:18 -- target/tls.sh@207 -- # bdevperf_pid=70601 00:11:17.196 15:34:18 -- target/tls.sh@208 -- # waitforlisten 70601 /var/tmp/bdevperf.sock 00:11:17.196 15:34:18 -- common/autotest_common.sh@817 -- # '[' -z 70601 ']' 00:11:17.196 15:34:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:17.196 15:34:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.196 15:34:18 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:17.196 15:34:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
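The tcp.c notices just above mark the TCP transport coming up, the deprecated PSK-path warning raised by nvmf_subsystem_add_host, and the TLS listener on 10.0.0.2:4420. Not part of the test flow, but one way to inspect the resulting subsystem, host, and listener is to query the target over its RPC socket (the same /var/tmp/spdk.sock the harness waits on):

# Optional check, not issued by the test itself.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems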
00:11:17.196 15:34:18 -- target/tls.sh@204 -- # echo '{ 00:11:17.196 "subsystems": [ 00:11:17.196 { 00:11:17.196 "subsystem": "keyring", 00:11:17.196 "config": [] 00:11:17.196 }, 00:11:17.196 { 00:11:17.196 "subsystem": "iobuf", 00:11:17.196 "config": [ 00:11:17.196 { 00:11:17.196 "method": "iobuf_set_options", 00:11:17.196 "params": { 00:11:17.196 "small_pool_count": 8192, 00:11:17.196 "large_pool_count": 1024, 00:11:17.196 "small_bufsize": 8192, 00:11:17.196 "large_bufsize": 135168 00:11:17.196 } 00:11:17.196 } 00:11:17.196 ] 00:11:17.196 }, 00:11:17.196 { 00:11:17.196 "subsystem": "sock", 00:11:17.196 "config": [ 00:11:17.196 { 00:11:17.196 "method": "sock_impl_set_options", 00:11:17.196 "params": { 00:11:17.196 "impl_name": "uring", 00:11:17.196 "recv_buf_size": 2097152, 00:11:17.196 "send_buf_size": 2097152, 00:11:17.196 "enable_recv_pipe": true, 00:11:17.197 "enable_quickack": false, 00:11:17.197 "enable_placement_id": 0, 00:11:17.197 "enable_zerocopy_send_server": false, 00:11:17.197 "enable_zerocopy_send_client": false, 00:11:17.197 "zerocopy_threshold": 0, 00:11:17.197 "tls_version": 0, 00:11:17.197 "enable_ktls": false 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "sock_impl_set_options", 00:11:17.197 "params": { 00:11:17.197 "impl_name": "posix", 00:11:17.197 "recv_buf_size": 2097152, 00:11:17.197 "send_buf_size": 2097152, 00:11:17.197 "enable_recv_pipe": true, 00:11:17.197 "enable_quickack": false, 00:11:17.197 "enable_placement_id": 0, 00:11:17.197 "enable_zerocopy_send_server": true, 00:11:17.197 "enable_zerocopy_send_client": false, 00:11:17.197 "zerocopy_threshold": 0, 00:11:17.197 "tls_version": 0, 00:11:17.197 "enable_ktls": false 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "sock_impl_set_options", 00:11:17.197 "params": { 00:11:17.197 "impl_name": "ssl", 00:11:17.197 "recv_buf_size": 4096, 00:11:17.197 "send_buf_size": 4096, 00:11:17.197 "enable_recv_pipe": true, 00:11:17.197 "enable_quickack": false, 00:11:17.197 "enable_placement_id": 0, 00:11:17.197 "enable_zerocopy_send_server": true, 00:11:17.197 "enable_zerocopy_send_client": false, 00:11:17.197 "zerocopy_threshold": 0, 00:11:17.197 "tls_version": 0, 00:11:17.197 "enable_ktls": false 00:11:17.197 } 00:11:17.197 } 00:11:17.197 ] 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "subsystem": "vmd", 00:11:17.197 "config": [] 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "subsystem": "accel", 00:11:17.197 "config": [ 00:11:17.197 { 00:11:17.197 "method": "accel_set_options", 00:11:17.197 "params": { 00:11:17.197 "small_cache_size": 128, 00:11:17.197 "large_cache_size": 16, 00:11:17.197 "task_count": 2048, 00:11:17.197 "sequence_count": 2048, 00:11:17.197 "buf_count": 2048 00:11:17.197 } 00:11:17.197 } 00:11:17.197 ] 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "subsystem": "bdev", 00:11:17.197 "config": [ 00:11:17.197 { 00:11:17.197 "method": "bdev_set_options", 00:11:17.197 "params": { 00:11:17.197 "bdev_io_pool_size": 65535, 00:11:17.197 "bdev_io_cache_size": 256, 00:11:17.197 "bdev_auto_examine": true, 00:11:17.197 "iobuf_small_cache_size": 128, 00:11:17.197 "iobuf_large_cache_size": 16 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "bdev_raid_set_options", 00:11:17.197 "params": { 00:11:17.197 "process_window_size_kb": 1024 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "bdev_iscsi_set_options", 00:11:17.197 "params": { 00:11:17.197 "timeout_sec": 30 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": 
"bdev_nvme_set_options", 00:11:17.197 "params": { 00:11:17.197 "action_on_timeout": "none", 00:11:17.197 "timeout_us": 0, 00:11:17.197 "timeout_admin_us": 0, 00:11:17.197 "keep_alive_timeout_ms": 10000, 00:11:17.197 "arbitration_burst": 0, 00:11:17.197 "low_priority_weight": 0, 00:11:17.197 "medium_priority_weight": 0, 00:11:17.197 "high_priority_weight": 0, 00:11:17.197 "nvme_adminq_poll_period_us": 10000, 00:11:17.197 "nvme_ioq_poll_period_us": 0, 00:11:17.197 "io_queue_requests": 512, 00:11:17.197 "delay_cmd_submit": true, 00:11:17.197 "transport_retry_count": 4, 00:11:17.197 "bdev_retry_count": 3, 00:11:17.197 "transport_ack_timeout": 0, 00:11:17.197 "ctrlr_loss_timeout_sec": 0, 00:11:17.197 "reconnect_delay_sec": 0, 00:11:17.197 "fast_io_fail_timeout_sec": 0, 00:11:17.197 "disable_auto_failback": false, 00:11:17.197 "generate_uuids": false, 00:11:17.197 "transport_tos": 0, 00:11:17.197 "nvme_error_stat": false, 00:11:17.197 "rdma_srq_size": 0, 00:11:17.197 "io_path_stat": false, 00:11:17.197 "allow_accel_sequence": false, 00:11:17.197 "rdma_max_cq_size": 0, 00:11:17.197 "rdma_cm_event_timeout_ms": 0, 00:11:17.197 "dhchap_digests": [ 00:11:17.197 "sha256", 00:11:17.197 "sha384", 00:11:17.197 "sha512" 00:11:17.197 ], 00:11:17.197 "dhchap_dhgroups": [ 00:11:17.197 "null", 00:11:17.197 "ffdhe2048", 00:11:17.197 "ffdhe3072", 00:11:17.197 "ffdhe4096", 00:11:17.197 "ffdhe6144", 00:11:17.197 "ffdhe8192" 00:11:17.197 ] 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "bdev_nvme_attach_controller", 00:11:17.197 "params": { 00:11:17.197 "name": "TLSTEST", 00:11:17.197 "trtype": "TCP", 00:11:17.197 "adrfam": "IPv4", 00:11:17.197 "traddr": "10.0.0.2", 00:11:17.197 "trsvcid": "4420", 00:11:17.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.197 "prchk_reftag": false, 00:11:17.197 "prchk_guard": false, 00:11:17.197 "ctrlr_loss_timeout_sec": 0, 00:11:17.197 "reconnect_delay_sec": 0, 00:11:17.197 "fast_io_fail_timeout_sec": 0, 00:11:17.197 "psk": "/tmp/tmp.9RiozTkANn", 00:11:17.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.197 "hdgst": false, 00:11:17.197 "ddgst": false 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "bdev_nvme_set_hotplug", 00:11:17.197 "params": { 00:11:17.197 "period_us": 100000, 00:11:17.197 "enable": false 00:11:17.197 } 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "method": "bdev_wait_for_examine" 00:11:17.197 } 00:11:17.197 ] 00:11:17.197 }, 00:11:17.197 { 00:11:17.197 "subsystem": "nbd", 00:11:17.197 "config": [] 00:11:17.197 } 00:11:17.197 ] 00:11:17.197 }' 00:11:17.197 15:34:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.197 15:34:18 -- common/autotest_common.sh@10 -- # set +x 00:11:17.197 [2024-04-17 15:34:18.474647] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:11:17.197 [2024-04-17 15:34:18.475499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70601 ] 00:11:17.197 [2024-04-17 15:34:18.615912] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.456 [2024-04-17 15:34:18.769852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.714 [2024-04-17 15:34:18.965235] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:17.714 [2024-04-17 15:34:18.966007] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:18.282 15:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.282 15:34:19 -- common/autotest_common.sh@850 -- # return 0 00:11:18.282 15:34:19 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:18.282 Running I/O for 10 seconds... 00:11:28.260 00:11:28.260 Latency(us) 00:11:28.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:28.260 Verification LBA range: start 0x0 length 0x2000 00:11:28.260 TLSTESTn1 : 10.02 3627.55 14.17 0.00 0.00 35215.68 7477.06 31695.59 00:11:28.260 =================================================================================================================== 00:11:28.260 Total : 3627.55 14.17 0.00 0.00 35215.68 7477.06 31695.59 00:11:28.260 0 00:11:28.260 15:34:29 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.260 15:34:29 -- target/tls.sh@214 -- # killprocess 70601 00:11:28.260 15:34:29 -- common/autotest_common.sh@936 -- # '[' -z 70601 ']' 00:11:28.260 15:34:29 -- common/autotest_common.sh@940 -- # kill -0 70601 00:11:28.260 15:34:29 -- common/autotest_common.sh@941 -- # uname 00:11:28.260 15:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.260 15:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70601 00:11:28.260 killing process with pid 70601 00:11:28.260 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.260 00:11:28.260 Latency(us) 00:11:28.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.260 =================================================================================================================== 00:11:28.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:28.260 15:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:28.260 15:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:28.260 15:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70601' 00:11:28.260 15:34:29 -- common/autotest_common.sh@955 -- # kill 70601 00:11:28.260 [2024-04-17 15:34:29.667194] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:28.260 15:34:29 -- common/autotest_common.sh@960 -- # wait 70601 00:11:28.827 15:34:29 -- target/tls.sh@215 -- # killprocess 70569 00:11:28.827 15:34:29 -- common/autotest_common.sh@936 -- # '[' -z 70569 ']' 00:11:28.827 15:34:29 -- common/autotest_common.sh@940 -- # kill -0 70569 00:11:28.827 15:34:30 
-- common/autotest_common.sh@941 -- # uname 00:11:28.827 15:34:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.827 15:34:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70569 00:11:28.827 15:34:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:28.827 15:34:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:28.827 killing process with pid 70569 00:11:28.827 15:34:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70569' 00:11:28.827 15:34:30 -- common/autotest_common.sh@955 -- # kill 70569 00:11:28.827 [2024-04-17 15:34:30.029940] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:28.827 15:34:30 -- common/autotest_common.sh@960 -- # wait 70569 00:11:29.085 15:34:30 -- target/tls.sh@218 -- # nvmfappstart 00:11:29.085 15:34:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:29.085 15:34:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:29.085 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:11:29.085 15:34:30 -- nvmf/common.sh@470 -- # nvmfpid=70740 00:11:29.085 15:34:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:29.085 15:34:30 -- nvmf/common.sh@471 -- # waitforlisten 70740 00:11:29.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.085 15:34:30 -- common/autotest_common.sh@817 -- # '[' -z 70740 ']' 00:11:29.085 15:34:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.085 15:34:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.085 15:34:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.085 15:34:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.085 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:11:29.085 [2024-04-17 15:34:30.443877] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:29.085 [2024-04-17 15:34:30.444248] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.344 [2024-04-17 15:34:30.586686] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.344 [2024-04-17 15:34:30.736839] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.344 [2024-04-17 15:34:30.736934] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.344 [2024-04-17 15:34:30.736950] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.344 [2024-04-17 15:34:30.736961] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.344 [2024-04-17 15:34:30.736970] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
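The repeated kill sequences throughout this log all come from the killprocess helper in autotest_common.sh; its shape can be read straight off the trace lines above (check the pid is set and alive, confirm the process name, print the message, kill, then wait). A rough reconstruction for orientation only; the real helper has more branches (sudo-owned processes, platform checks) than shown here.

# Reconstructed from the xtrace above; not the verbatim helper.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                       # '[' -z "$pid" ']'
    kill -0 "$pid" || return 1                      # is the process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 / reactor_1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}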
00:11:29.344 [2024-04-17 15:34:30.737011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.280 15:34:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.280 15:34:31 -- common/autotest_common.sh@850 -- # return 0 00:11:30.280 15:34:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:30.280 15:34:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:30.280 15:34:31 -- common/autotest_common.sh@10 -- # set +x 00:11:30.280 15:34:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.280 15:34:31 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9RiozTkANn 00:11:30.280 15:34:31 -- target/tls.sh@49 -- # local key=/tmp/tmp.9RiozTkANn 00:11:30.280 15:34:31 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:30.280 [2024-04-17 15:34:31.639408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.280 15:34:31 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:30.539 15:34:31 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:30.797 [2024-04-17 15:34:32.147505] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:30.797 [2024-04-17 15:34:32.147830] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.797 15:34:32 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:31.056 malloc0 00:11:31.056 15:34:32 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:31.315 15:34:32 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn 00:11:31.574 [2024-04-17 15:34:32.934976] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:31.574 15:34:32 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:11:31.574 15:34:32 -- target/tls.sh@222 -- # bdevperf_pid=70794 00:11:31.574 15:34:32 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.574 15:34:32 -- target/tls.sh@225 -- # waitforlisten 70794 /var/tmp/bdevperf.sock 00:11:31.574 15:34:32 -- common/autotest_common.sh@817 -- # '[' -z 70794 ']' 00:11:31.574 15:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.574 15:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:31.574 15:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.574 15:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:31.574 15:34:32 -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 [2024-04-17 15:34:33.002063] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
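Unlike the first target, the one started at target/tls.sh@218 (pid 70740) gets no config file; setup_nvmf_tgt configures it step by step over the default /var/tmp/spdk.sock RPC socket. The rpc.py calls traced above, collected in order (commands copied from the log):

# The setup_nvmf_tgt sequence, as traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                               # TCP transport (-o matches c2h_success=false in the saved config)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                                   # subsystem, serial number, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                                 # -k = secure (TLS) channel
$RPC bdev_malloc_create 32 4096 -b malloc0                         # 32 MiB / 4 KiB-block backing bdev
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9RiozTkANn           # PSK still passed as a file path here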
00:11:31.574 [2024-04-17 15:34:33.002486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:11:31.832 [2024-04-17 15:34:33.141379] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.091 [2024-04-17 15:34:33.296232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.657 15:34:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.657 15:34:33 -- common/autotest_common.sh@850 -- # return 0 00:11:32.657 15:34:33 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RiozTkANn 00:11:32.915 15:34:34 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:33.174 [2024-04-17 15:34:34.450886] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:33.174 nvme0n1 00:11:33.174 15:34:34 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:33.431 Running I/O for 1 seconds... 00:11:34.367 00:11:34.367 Latency(us) 00:11:34.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.367 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:34.367 Verification LBA range: start 0x0 length 0x2000 00:11:34.367 nvme0n1 : 1.04 3579.99 13.98 0.00 0.00 35162.21 7536.64 31695.59 00:11:34.367 =================================================================================================================== 00:11:34.367 Total : 3579.99 13.98 0.00 0.00 35162.21 7536.64 31695.59 00:11:34.367 0 00:11:34.367 15:34:35 -- target/tls.sh@234 -- # killprocess 70794 00:11:34.367 15:34:35 -- common/autotest_common.sh@936 -- # '[' -z 70794 ']' 00:11:34.367 15:34:35 -- common/autotest_common.sh@940 -- # kill -0 70794 00:11:34.367 15:34:35 -- common/autotest_common.sh@941 -- # uname 00:11:34.367 15:34:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:34.367 15:34:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70794 00:11:34.367 15:34:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:34.367 15:34:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:34.367 15:34:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70794' 00:11:34.367 killing process with pid 70794 00:11:34.367 15:34:35 -- common/autotest_common.sh@955 -- # kill 70794 00:11:34.367 Received shutdown signal, test time was about 1.000000 seconds 00:11:34.367 00:11:34.367 Latency(us) 00:11:34.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.367 =================================================================================================================== 00:11:34.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:34.367 15:34:35 -- common/autotest_common.sh@960 -- # wait 70794 00:11:34.625 15:34:36 -- target/tls.sh@235 -- # killprocess 70740 00:11:34.625 15:34:36 -- common/autotest_common.sh@936 -- # '[' -z 70740 ']' 00:11:34.625 15:34:36 -- common/autotest_common.sh@940 -- # kill -0 70740 00:11:34.883 15:34:36 -- common/autotest_common.sh@941 -- # 
uname 00:11:34.883 15:34:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:34.883 15:34:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70740 00:11:34.883 killing process with pid 70740 00:11:34.883 15:34:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:34.883 15:34:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:34.883 15:34:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70740' 00:11:34.883 15:34:36 -- common/autotest_common.sh@955 -- # kill 70740 00:11:34.883 [2024-04-17 15:34:36.094363] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:34.883 15:34:36 -- common/autotest_common.sh@960 -- # wait 70740 00:11:35.142 15:34:36 -- target/tls.sh@238 -- # nvmfappstart 00:11:35.142 15:34:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:35.142 15:34:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.142 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.142 15:34:36 -- nvmf/common.sh@470 -- # nvmfpid=70851 00:11:35.142 15:34:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:35.142 15:34:36 -- nvmf/common.sh@471 -- # waitforlisten 70851 00:11:35.142 15:34:36 -- common/autotest_common.sh@817 -- # '[' -z 70851 ']' 00:11:35.142 15:34:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.142 15:34:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.142 15:34:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.142 15:34:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.142 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.142 [2024-04-17 15:34:36.503555] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:35.142 [2024-04-17 15:34:36.503655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.400 [2024-04-17 15:34:36.636621] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.400 [2024-04-17 15:34:36.777228] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.400 [2024-04-17 15:34:36.777288] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.400 [2024-04-17 15:34:36.777300] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.400 [2024-04-17 15:34:36.777308] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.400 [2024-04-17 15:34:36.777316] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:35.400 [2024-04-17 15:34:36.777354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.336 15:34:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.336 15:34:37 -- common/autotest_common.sh@850 -- # return 0 00:11:36.336 15:34:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:36.336 15:34:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:36.336 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.336 15:34:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.336 15:34:37 -- target/tls.sh@239 -- # rpc_cmd 00:11:36.336 15:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.336 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.336 [2024-04-17 15:34:37.543267] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.336 malloc0 00:11:36.336 [2024-04-17 15:34:37.577445] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:36.336 [2024-04-17 15:34:37.577692] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.336 15:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.336 15:34:37 -- target/tls.sh@252 -- # bdevperf_pid=70883 00:11:36.336 15:34:37 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:11:36.336 15:34:37 -- target/tls.sh@254 -- # waitforlisten 70883 /var/tmp/bdevperf.sock 00:11:36.336 15:34:37 -- common/autotest_common.sh@817 -- # '[' -z 70883 ']' 00:11:36.336 15:34:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.336 15:34:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:36.336 15:34:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:36.336 15:34:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:36.336 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.336 [2024-04-17 15:34:37.660747] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
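The bdevperf instances in this part of the run attach using the keyring flow rather than a raw PSK path: the key file is first registered as a named key on the initiator, then referenced by name in bdev_nvme_attach_controller. The two rpc.py calls, as traced above at 15:34:33-34 for pid 70794 and repeated below for pid 70883:

# Keyring-based attach over the bdevperf RPC socket (commands copied from the log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RiozTkANn
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
     -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1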
00:11:36.336 [2024-04-17 15:34:37.661106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70883 ] 00:11:36.594 [2024-04-17 15:34:37.800633] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.594 [2024-04-17 15:34:37.941793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.529 15:34:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:37.529 15:34:38 -- common/autotest_common.sh@850 -- # return 0 00:11:37.529 15:34:38 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RiozTkANn 00:11:37.529 15:34:38 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:37.787 [2024-04-17 15:34:39.163063] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:38.046 nvme0n1 00:11:38.046 15:34:39 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:38.046 Running I/O for 1 seconds... 00:11:39.420 00:11:39.420 Latency(us) 00:11:39.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.420 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.420 Verification LBA range: start 0x0 length 0x2000 00:11:39.420 nvme0n1 : 1.03 4194.22 16.38 0.00 0.00 30087.43 6911.07 18350.08 00:11:39.420 =================================================================================================================== 00:11:39.420 Total : 4194.22 16.38 0.00 0.00 30087.43 6911.07 18350.08 00:11:39.420 0 00:11:39.420 15:34:40 -- target/tls.sh@263 -- # rpc_cmd save_config 00:11:39.420 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:39.420 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:11:39.420 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:39.420 15:34:40 -- target/tls.sh@263 -- # tgtcfg='{ 00:11:39.420 "subsystems": [ 00:11:39.420 { 00:11:39.420 "subsystem": "keyring", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "keyring_file_add_key", 00:11:39.420 "params": { 00:11:39.420 "name": "key0", 00:11:39.420 "path": "/tmp/tmp.9RiozTkANn" 00:11:39.420 } 00:11:39.420 } 00:11:39.420 ] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "iobuf", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "iobuf_set_options", 00:11:39.420 "params": { 00:11:39.420 "small_pool_count": 8192, 00:11:39.420 "large_pool_count": 1024, 00:11:39.420 "small_bufsize": 8192, 00:11:39.420 "large_bufsize": 135168 00:11:39.420 } 00:11:39.420 } 00:11:39.420 ] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "sock", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "sock_impl_set_options", 00:11:39.420 "params": { 00:11:39.420 "impl_name": "uring", 00:11:39.420 "recv_buf_size": 2097152, 00:11:39.420 "send_buf_size": 2097152, 00:11:39.420 "enable_recv_pipe": true, 00:11:39.420 "enable_quickack": false, 00:11:39.420 "enable_placement_id": 0, 00:11:39.420 "enable_zerocopy_send_server": false, 00:11:39.420 "enable_zerocopy_send_client": false, 00:11:39.420 "zerocopy_threshold": 0, 
00:11:39.420 "tls_version": 0, 00:11:39.420 "enable_ktls": false 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "sock_impl_set_options", 00:11:39.420 "params": { 00:11:39.420 "impl_name": "posix", 00:11:39.420 "recv_buf_size": 2097152, 00:11:39.420 "send_buf_size": 2097152, 00:11:39.420 "enable_recv_pipe": true, 00:11:39.420 "enable_quickack": false, 00:11:39.420 "enable_placement_id": 0, 00:11:39.420 "enable_zerocopy_send_server": true, 00:11:39.420 "enable_zerocopy_send_client": false, 00:11:39.420 "zerocopy_threshold": 0, 00:11:39.420 "tls_version": 0, 00:11:39.420 "enable_ktls": false 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "sock_impl_set_options", 00:11:39.420 "params": { 00:11:39.420 "impl_name": "ssl", 00:11:39.420 "recv_buf_size": 4096, 00:11:39.420 "send_buf_size": 4096, 00:11:39.420 "enable_recv_pipe": true, 00:11:39.420 "enable_quickack": false, 00:11:39.420 "enable_placement_id": 0, 00:11:39.420 "enable_zerocopy_send_server": true, 00:11:39.420 "enable_zerocopy_send_client": false, 00:11:39.420 "zerocopy_threshold": 0, 00:11:39.420 "tls_version": 0, 00:11:39.420 "enable_ktls": false 00:11:39.420 } 00:11:39.420 } 00:11:39.420 ] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "vmd", 00:11:39.420 "config": [] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "accel", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "accel_set_options", 00:11:39.420 "params": { 00:11:39.420 "small_cache_size": 128, 00:11:39.420 "large_cache_size": 16, 00:11:39.420 "task_count": 2048, 00:11:39.420 "sequence_count": 2048, 00:11:39.420 "buf_count": 2048 00:11:39.420 } 00:11:39.420 } 00:11:39.420 ] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "bdev", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "bdev_set_options", 00:11:39.420 "params": { 00:11:39.420 "bdev_io_pool_size": 65535, 00:11:39.420 "bdev_io_cache_size": 256, 00:11:39.420 "bdev_auto_examine": true, 00:11:39.420 "iobuf_small_cache_size": 128, 00:11:39.420 "iobuf_large_cache_size": 16 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_raid_set_options", 00:11:39.420 "params": { 00:11:39.420 "process_window_size_kb": 1024 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_iscsi_set_options", 00:11:39.420 "params": { 00:11:39.420 "timeout_sec": 30 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_nvme_set_options", 00:11:39.420 "params": { 00:11:39.420 "action_on_timeout": "none", 00:11:39.420 "timeout_us": 0, 00:11:39.420 "timeout_admin_us": 0, 00:11:39.420 "keep_alive_timeout_ms": 10000, 00:11:39.420 "arbitration_burst": 0, 00:11:39.420 "low_priority_weight": 0, 00:11:39.420 "medium_priority_weight": 0, 00:11:39.420 "high_priority_weight": 0, 00:11:39.420 "nvme_adminq_poll_period_us": 10000, 00:11:39.420 "nvme_ioq_poll_period_us": 0, 00:11:39.420 "io_queue_requests": 0, 00:11:39.420 "delay_cmd_submit": true, 00:11:39.420 "transport_retry_count": 4, 00:11:39.420 "bdev_retry_count": 3, 00:11:39.420 "transport_ack_timeout": 0, 00:11:39.420 "ctrlr_loss_timeout_sec": 0, 00:11:39.420 "reconnect_delay_sec": 0, 00:11:39.420 "fast_io_fail_timeout_sec": 0, 00:11:39.420 "disable_auto_failback": false, 00:11:39.420 "generate_uuids": false, 00:11:39.420 "transport_tos": 0, 00:11:39.420 "nvme_error_stat": false, 00:11:39.420 "rdma_srq_size": 0, 00:11:39.420 "io_path_stat": false, 00:11:39.420 "allow_accel_sequence": false, 00:11:39.420 "rdma_max_cq_size": 0, 00:11:39.420 
"rdma_cm_event_timeout_ms": 0, 00:11:39.420 "dhchap_digests": [ 00:11:39.420 "sha256", 00:11:39.420 "sha384", 00:11:39.420 "sha512" 00:11:39.420 ], 00:11:39.420 "dhchap_dhgroups": [ 00:11:39.420 "null", 00:11:39.420 "ffdhe2048", 00:11:39.420 "ffdhe3072", 00:11:39.420 "ffdhe4096", 00:11:39.420 "ffdhe6144", 00:11:39.420 "ffdhe8192" 00:11:39.420 ] 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_nvme_set_hotplug", 00:11:39.420 "params": { 00:11:39.420 "period_us": 100000, 00:11:39.420 "enable": false 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_malloc_create", 00:11:39.420 "params": { 00:11:39.420 "name": "malloc0", 00:11:39.420 "num_blocks": 8192, 00:11:39.420 "block_size": 4096, 00:11:39.420 "physical_block_size": 4096, 00:11:39.420 "uuid": "592eed4b-19d5-47ea-b1b1-1316014aa113", 00:11:39.420 "optimal_io_boundary": 0 00:11:39.420 } 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "method": "bdev_wait_for_examine" 00:11:39.420 } 00:11:39.420 ] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "nbd", 00:11:39.420 "config": [] 00:11:39.420 }, 00:11:39.420 { 00:11:39.420 "subsystem": "scheduler", 00:11:39.420 "config": [ 00:11:39.420 { 00:11:39.420 "method": "framework_set_scheduler", 00:11:39.420 "params": { 00:11:39.420 "name": "static" 00:11:39.421 } 00:11:39.421 } 00:11:39.421 ] 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "subsystem": "nvmf", 00:11:39.421 "config": [ 00:11:39.421 { 00:11:39.421 "method": "nvmf_set_config", 00:11:39.421 "params": { 00:11:39.421 "discovery_filter": "match_any", 00:11:39.421 "admin_cmd_passthru": { 00:11:39.421 "identify_ctrlr": false 00:11:39.421 } 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_set_max_subsystems", 00:11:39.421 "params": { 00:11:39.421 "max_subsystems": 1024 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_set_crdt", 00:11:39.421 "params": { 00:11:39.421 "crdt1": 0, 00:11:39.421 "crdt2": 0, 00:11:39.421 "crdt3": 0 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_create_transport", 00:11:39.421 "params": { 00:11:39.421 "trtype": "TCP", 00:11:39.421 "max_queue_depth": 128, 00:11:39.421 "max_io_qpairs_per_ctrlr": 127, 00:11:39.421 "in_capsule_data_size": 4096, 00:11:39.421 "max_io_size": 131072, 00:11:39.421 "io_unit_size": 131072, 00:11:39.421 "max_aq_depth": 128, 00:11:39.421 "num_shared_buffers": 511, 00:11:39.421 "buf_cache_size": 4294967295, 00:11:39.421 "dif_insert_or_strip": false, 00:11:39.421 "zcopy": false, 00:11:39.421 "c2h_success": false, 00:11:39.421 "sock_priority": 0, 00:11:39.421 "abort_timeout_sec": 1, 00:11:39.421 "ack_timeout": 0 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_create_subsystem", 00:11:39.421 "params": { 00:11:39.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.421 "allow_any_host": false, 00:11:39.421 "serial_number": "00000000000000000000", 00:11:39.421 "model_number": "SPDK bdev Controller", 00:11:39.421 "max_namespaces": 32, 00:11:39.421 "min_cntlid": 1, 00:11:39.421 "max_cntlid": 65519, 00:11:39.421 "ana_reporting": false 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_subsystem_add_host", 00:11:39.421 "params": { 00:11:39.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.421 "host": "nqn.2016-06.io.spdk:host1", 00:11:39.421 "psk": "key0" 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_subsystem_add_ns", 00:11:39.421 "params": { 00:11:39.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:11:39.421 "namespace": { 00:11:39.421 "nsid": 1, 00:11:39.421 "bdev_name": "malloc0", 00:11:39.421 "nguid": "592EED4B19D547EAB1B11316014AA113", 00:11:39.421 "uuid": "592eed4b-19d5-47ea-b1b1-1316014aa113", 00:11:39.421 "no_auto_visible": false 00:11:39.421 } 00:11:39.421 } 00:11:39.421 }, 00:11:39.421 { 00:11:39.421 "method": "nvmf_subsystem_add_listener", 00:11:39.421 "params": { 00:11:39.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.421 "listen_address": { 00:11:39.421 "trtype": "TCP", 00:11:39.421 "adrfam": "IPv4", 00:11:39.421 "traddr": "10.0.0.2", 00:11:39.421 "trsvcid": "4420" 00:11:39.421 }, 00:11:39.421 "secure_channel": true 00:11:39.421 } 00:11:39.421 } 00:11:39.421 ] 00:11:39.421 } 00:11:39.421 ] 00:11:39.421 }' 00:11:39.421 15:34:40 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:39.679 15:34:40 -- target/tls.sh@264 -- # bperfcfg='{ 00:11:39.679 "subsystems": [ 00:11:39.679 { 00:11:39.679 "subsystem": "keyring", 00:11:39.679 "config": [ 00:11:39.679 { 00:11:39.679 "method": "keyring_file_add_key", 00:11:39.679 "params": { 00:11:39.679 "name": "key0", 00:11:39.679 "path": "/tmp/tmp.9RiozTkANn" 00:11:39.679 } 00:11:39.679 } 00:11:39.679 ] 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "subsystem": "iobuf", 00:11:39.679 "config": [ 00:11:39.679 { 00:11:39.679 "method": "iobuf_set_options", 00:11:39.679 "params": { 00:11:39.679 "small_pool_count": 8192, 00:11:39.679 "large_pool_count": 1024, 00:11:39.679 "small_bufsize": 8192, 00:11:39.679 "large_bufsize": 135168 00:11:39.679 } 00:11:39.679 } 00:11:39.679 ] 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "subsystem": "sock", 00:11:39.679 "config": [ 00:11:39.679 { 00:11:39.679 "method": "sock_impl_set_options", 00:11:39.679 "params": { 00:11:39.679 "impl_name": "uring", 00:11:39.679 "recv_buf_size": 2097152, 00:11:39.679 "send_buf_size": 2097152, 00:11:39.679 "enable_recv_pipe": true, 00:11:39.679 "enable_quickack": false, 00:11:39.679 "enable_placement_id": 0, 00:11:39.679 "enable_zerocopy_send_server": false, 00:11:39.679 "enable_zerocopy_send_client": false, 00:11:39.679 "zerocopy_threshold": 0, 00:11:39.679 "tls_version": 0, 00:11:39.679 "enable_ktls": false 00:11:39.679 } 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "method": "sock_impl_set_options", 00:11:39.679 "params": { 00:11:39.679 "impl_name": "posix", 00:11:39.679 "recv_buf_size": 2097152, 00:11:39.679 "send_buf_size": 2097152, 00:11:39.679 "enable_recv_pipe": true, 00:11:39.679 "enable_quickack": false, 00:11:39.679 "enable_placement_id": 0, 00:11:39.679 "enable_zerocopy_send_server": true, 00:11:39.679 "enable_zerocopy_send_client": false, 00:11:39.679 "zerocopy_threshold": 0, 00:11:39.679 "tls_version": 0, 00:11:39.679 "enable_ktls": false 00:11:39.679 } 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "method": "sock_impl_set_options", 00:11:39.679 "params": { 00:11:39.679 "impl_name": "ssl", 00:11:39.679 "recv_buf_size": 4096, 00:11:39.679 "send_buf_size": 4096, 00:11:39.679 "enable_recv_pipe": true, 00:11:39.679 "enable_quickack": false, 00:11:39.679 "enable_placement_id": 0, 00:11:39.679 "enable_zerocopy_send_server": true, 00:11:39.679 "enable_zerocopy_send_client": false, 00:11:39.679 "zerocopy_threshold": 0, 00:11:39.679 "tls_version": 0, 00:11:39.679 "enable_ktls": false 00:11:39.679 } 00:11:39.679 } 00:11:39.679 ] 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "subsystem": "vmd", 00:11:39.679 "config": [] 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "subsystem": "accel", 00:11:39.679 "config": [ 
00:11:39.679 { 00:11:39.679 "method": "accel_set_options", 00:11:39.679 "params": { 00:11:39.679 "small_cache_size": 128, 00:11:39.679 "large_cache_size": 16, 00:11:39.679 "task_count": 2048, 00:11:39.679 "sequence_count": 2048, 00:11:39.679 "buf_count": 2048 00:11:39.679 } 00:11:39.679 } 00:11:39.679 ] 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "subsystem": "bdev", 00:11:39.679 "config": [ 00:11:39.679 { 00:11:39.679 "method": "bdev_set_options", 00:11:39.679 "params": { 00:11:39.679 "bdev_io_pool_size": 65535, 00:11:39.679 "bdev_io_cache_size": 256, 00:11:39.679 "bdev_auto_examine": true, 00:11:39.679 "iobuf_small_cache_size": 128, 00:11:39.679 "iobuf_large_cache_size": 16 00:11:39.679 } 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "method": "bdev_raid_set_options", 00:11:39.679 "params": { 00:11:39.679 "process_window_size_kb": 1024 00:11:39.679 } 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "method": "bdev_iscsi_set_options", 00:11:39.679 "params": { 00:11:39.679 "timeout_sec": 30 00:11:39.679 } 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "method": "bdev_nvme_set_options", 00:11:39.679 "params": { 00:11:39.679 "action_on_timeout": "none", 00:11:39.679 "timeout_us": 0, 00:11:39.679 "timeout_admin_us": 0, 00:11:39.679 "keep_alive_timeout_ms": 10000, 00:11:39.679 "arbitration_burst": 0, 00:11:39.679 "low_priority_weight": 0, 00:11:39.679 "medium_priority_weight": 0, 00:11:39.680 "high_priority_weight": 0, 00:11:39.680 "nvme_adminq_poll_period_us": 10000, 00:11:39.680 "nvme_ioq_poll_period_us": 0, 00:11:39.680 "io_queue_requests": 512, 00:11:39.680 "delay_cmd_submit": true, 00:11:39.680 "transport_retry_count": 4, 00:11:39.680 "bdev_retry_count": 3, 00:11:39.680 "transport_ack_timeout": 0, 00:11:39.680 "ctrlr_loss_timeout_sec": 0, 00:11:39.680 "reconnect_delay_sec": 0, 00:11:39.680 "fast_io_fail_timeout_sec": 0, 00:11:39.680 "disable_auto_failback": false, 00:11:39.680 "generate_uuids": false, 00:11:39.680 "transport_tos": 0, 00:11:39.680 "nvme_error_stat": false, 00:11:39.680 "rdma_srq_size": 0, 00:11:39.680 "io_path_stat": false, 00:11:39.680 "allow_accel_sequence": false, 00:11:39.680 "rdma_max_cq_size": 0, 00:11:39.680 "rdma_cm_event_timeout_ms": 0, 00:11:39.680 "dhchap_digests": [ 00:11:39.680 "sha256", 00:11:39.680 "sha384", 00:11:39.680 "sha512" 00:11:39.680 ], 00:11:39.680 "dhchap_dhgroups": [ 00:11:39.680 "null", 00:11:39.680 "ffdhe2048", 00:11:39.680 "ffdhe3072", 00:11:39.680 "ffdhe4096", 00:11:39.680 "ffdhe6144", 00:11:39.680 "ffdhe8192" 00:11:39.680 ] 00:11:39.680 } 00:11:39.680 }, 00:11:39.680 { 00:11:39.680 "method": "bdev_nvme_attach_controller", 00:11:39.680 "params": { 00:11:39.680 "name": "nvme0", 00:11:39.680 "trtype": "TCP", 00:11:39.680 "adrfam": "IPv4", 00:11:39.680 "traddr": "10.0.0.2", 00:11:39.680 "trsvcid": "4420", 00:11:39.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.680 "prchk_reftag": false, 00:11:39.680 "prchk_guard": false, 00:11:39.680 "ctrlr_loss_timeout_sec": 0, 00:11:39.680 "reconnect_delay_sec": 0, 00:11:39.680 "fast_io_fail_timeout_sec": 0, 00:11:39.680 "psk": "key0", 00:11:39.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.680 "hdgst": false, 00:11:39.680 "ddgst": false 00:11:39.680 } 00:11:39.680 }, 00:11:39.680 { 00:11:39.680 "method": "bdev_nvme_set_hotplug", 00:11:39.680 "params": { 00:11:39.680 "period_us": 100000, 00:11:39.680 "enable": false 00:11:39.680 } 00:11:39.680 }, 00:11:39.680 { 00:11:39.680 "method": "bdev_enable_histogram", 00:11:39.680 "params": { 00:11:39.680 "name": "nvme0n1", 00:11:39.680 "enable": true 00:11:39.680 } 
00:11:39.680 }, 00:11:39.680 { 00:11:39.680 "method": "bdev_wait_for_examine" 00:11:39.680 } 00:11:39.680 ] 00:11:39.680 }, 00:11:39.680 { 00:11:39.680 "subsystem": "nbd", 00:11:39.680 "config": [] 00:11:39.680 } 00:11:39.680 ] 00:11:39.680 }' 00:11:39.680 15:34:40 -- target/tls.sh@266 -- # killprocess 70883 00:11:39.680 15:34:40 -- common/autotest_common.sh@936 -- # '[' -z 70883 ']' 00:11:39.680 15:34:40 -- common/autotest_common.sh@940 -- # kill -0 70883 00:11:39.680 15:34:40 -- common/autotest_common.sh@941 -- # uname 00:11:39.680 15:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.680 15:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70883 00:11:39.680 killing process with pid 70883 00:11:39.680 Received shutdown signal, test time was about 1.000000 seconds 00:11:39.680 00:11:39.680 Latency(us) 00:11:39.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.680 =================================================================================================================== 00:11:39.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:39.680 15:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:39.680 15:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:39.680 15:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70883' 00:11:39.680 15:34:40 -- common/autotest_common.sh@955 -- # kill 70883 00:11:39.680 15:34:40 -- common/autotest_common.sh@960 -- # wait 70883 00:11:39.938 15:34:41 -- target/tls.sh@267 -- # killprocess 70851 00:11:39.938 15:34:41 -- common/autotest_common.sh@936 -- # '[' -z 70851 ']' 00:11:39.938 15:34:41 -- common/autotest_common.sh@940 -- # kill -0 70851 00:11:39.938 15:34:41 -- common/autotest_common.sh@941 -- # uname 00:11:39.938 15:34:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.938 15:34:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70851 00:11:39.938 killing process with pid 70851 00:11:39.938 15:34:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:39.938 15:34:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:39.938 15:34:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70851' 00:11:39.938 15:34:41 -- common/autotest_common.sh@955 -- # kill 70851 00:11:39.938 15:34:41 -- common/autotest_common.sh@960 -- # wait 70851 00:11:40.506 15:34:41 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:11:40.506 15:34:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:40.506 15:34:41 -- target/tls.sh@269 -- # echo '{ 00:11:40.506 "subsystems": [ 00:11:40.506 { 00:11:40.506 "subsystem": "keyring", 00:11:40.506 "config": [ 00:11:40.506 { 00:11:40.506 "method": "keyring_file_add_key", 00:11:40.506 "params": { 00:11:40.506 "name": "key0", 00:11:40.506 "path": "/tmp/tmp.9RiozTkANn" 00:11:40.506 } 00:11:40.506 } 00:11:40.506 ] 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "subsystem": "iobuf", 00:11:40.506 "config": [ 00:11:40.506 { 00:11:40.506 "method": "iobuf_set_options", 00:11:40.506 "params": { 00:11:40.506 "small_pool_count": 8192, 00:11:40.506 "large_pool_count": 1024, 00:11:40.506 "small_bufsize": 8192, 00:11:40.506 "large_bufsize": 135168 00:11:40.506 } 00:11:40.506 } 00:11:40.506 ] 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "subsystem": "sock", 00:11:40.506 "config": [ 00:11:40.506 { 00:11:40.506 "method": "sock_impl_set_options", 00:11:40.506 "params": { 00:11:40.506 "impl_name": "uring", 
00:11:40.506 "recv_buf_size": 2097152, 00:11:40.506 "send_buf_size": 2097152, 00:11:40.506 "enable_recv_pipe": true, 00:11:40.506 "enable_quickack": false, 00:11:40.506 "enable_placement_id": 0, 00:11:40.506 "enable_zerocopy_send_server": false, 00:11:40.506 "enable_zerocopy_send_client": false, 00:11:40.506 "zerocopy_threshold": 0, 00:11:40.506 "tls_version": 0, 00:11:40.506 "enable_ktls": false 00:11:40.506 } 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "method": "sock_impl_set_options", 00:11:40.506 "params": { 00:11:40.506 "impl_name": "posix", 00:11:40.506 "recv_buf_size": 2097152, 00:11:40.506 "send_buf_size": 2097152, 00:11:40.506 "enable_recv_pipe": true, 00:11:40.506 "enable_quickack": false, 00:11:40.506 "enable_placement_id": 0, 00:11:40.506 "enable_zerocopy_send_server": true, 00:11:40.506 "enable_zerocopy_send_client": false, 00:11:40.506 "zerocopy_threshold": 0, 00:11:40.506 "tls_version": 0, 00:11:40.506 "enable_ktls": false 00:11:40.506 } 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "method": "sock_impl_set_options", 00:11:40.506 "params": { 00:11:40.506 "impl_name": "ssl", 00:11:40.506 "recv_buf_size": 4096, 00:11:40.506 "send_buf_size": 4096, 00:11:40.506 "enable_recv_pipe": true, 00:11:40.506 "enable_quickack": false, 00:11:40.506 "enable_placement_id": 0, 00:11:40.506 "enable_zerocopy_send_server": true, 00:11:40.506 "enable_zerocopy_send_client": false, 00:11:40.506 "zerocopy_threshold": 0, 00:11:40.506 "tls_version": 0, 00:11:40.506 "enable_ktls": false 00:11:40.506 } 00:11:40.506 } 00:11:40.506 ] 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "subsystem": "vmd", 00:11:40.506 "config": [] 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "subsystem": "accel", 00:11:40.506 "config": [ 00:11:40.506 { 00:11:40.506 "method": "accel_set_options", 00:11:40.506 "params": { 00:11:40.506 "small_cache_size": 128, 00:11:40.506 "large_cache_size": 16, 00:11:40.506 "task_count": 2048, 00:11:40.506 "sequence_count": 2048, 00:11:40.506 "buf_count": 2048 00:11:40.506 } 00:11:40.506 } 00:11:40.506 ] 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "subsystem": "bdev", 00:11:40.506 "config": [ 00:11:40.506 { 00:11:40.506 "method": "bdev_set_options", 00:11:40.506 "params": { 00:11:40.506 "bdev_io_pool_size": 65535, 00:11:40.506 "bdev_io_cache_size": 256, 00:11:40.506 "bdev_auto_examine": true, 00:11:40.506 "iobuf_small_cache_size": 128, 00:11:40.506 "iobuf_large_cache_size": 16 00:11:40.506 } 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "method": "bdev_raid_set_options", 00:11:40.506 "params": { 00:11:40.506 "process_window_size_kb": 1024 00:11:40.506 } 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "method": "bdev_iscsi_set_options", 00:11:40.506 "params": { 00:11:40.506 "timeout_sec": 30 00:11:40.506 } 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "method": "bdev_nvme_set_options", 00:11:40.506 "params": { 00:11:40.506 "action_on_timeout": "none", 00:11:40.506 "timeout_us": 0, 00:11:40.506 "timeout_admin_us": 0, 00:11:40.506 "keep_alive_timeout_ms": 10000, 00:11:40.506 "arbitration_burst": 0, 00:11:40.506 "low_priority_weight": 0, 00:11:40.506 "medium_priority_weight": 0, 00:11:40.506 "high_priority_weight": 0, 00:11:40.506 "nvme_adminq_poll_period_us": 10000, 00:11:40.506 "nvme_ioq_poll_period_us": 0, 00:11:40.507 "io_queue_requests": 0, 00:11:40.507 "delay_cmd_submit": true, 00:11:40.507 "transport_retry_count": 4, 00:11:40.507 "bdev_retry_count": 3, 00:11:40.507 "transport_ack_timeout": 0, 00:11:40.507 "ctrlr_loss_timeout_sec": 0, 00:11:40.507 "reconnect_delay_sec": 0, 00:11:40.507 
"fast_io_fail_timeout_sec": 0, 00:11:40.507 "disable_auto_failback": false, 00:11:40.507 "generate_uuids": false, 00:11:40.507 "transport_tos": 0, 00:11:40.507 "nvme_error_stat": false, 00:11:40.507 "rdma_srq_size": 0, 00:11:40.507 "io_path_stat": false, 00:11:40.507 "allow_accel_sequence": false, 00:11:40.507 "rdma_max_cq_size": 0, 00:11:40.507 "rdma_cm_event_timeout_ms": 0, 00:11:40.507 "dhchap_digests": [ 00:11:40.507 "sha256", 00:11:40.507 "sha384", 00:11:40.507 "sha512" 00:11:40.507 ], 00:11:40.507 "dhchap_dhgroups": [ 00:11:40.507 "null", 00:11:40.507 "ffdhe2048", 00:11:40.507 "ffdhe3072", 00:11:40.507 "ffdhe4096", 00:11:40.507 "ffdhe6144", 00:11:40.507 "ffdhe8192" 00:11:40.507 ] 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "bdev_nvme_set_hotplug", 00:11:40.507 "params": { 00:11:40.507 "period_us": 100000, 00:11:40.507 "enable": false 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "bdev_malloc_create", 00:11:40.507 "params": { 00:11:40.507 "name": "malloc0", 00:11:40.507 "num_blocks": 8192, 00:11:40.507 "block_size": 4096, 00:11:40.507 "physical_block_size": 4096, 00:11:40.507 "uuid": "592eed4b-19d5-47ea-b1b1-1316014aa113", 00:11:40.507 "optimal_io_boundary": 0 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "bdev_wait_for_examine" 00:11:40.507 } 00:11:40.507 ] 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "subsystem": "nbd", 00:11:40.507 "config": [] 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "subsystem": "scheduler", 00:11:40.507 "config": [ 00:11:40.507 { 00:11:40.507 "method": "framework_set_scheduler", 00:11:40.507 "params": { 00:11:40.507 "name": "static" 00:11:40.507 } 00:11:40.507 } 00:11:40.507 ] 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "subsystem": "nvmf", 00:11:40.507 "config": [ 00:11:40.507 { 00:11:40.507 "method": "nvmf_set_config", 00:11:40.507 "params": { 00:11:40.507 "discovery_filter": "match_any", 00:11:40.507 "admin_cmd_passthru": { 00:11:40.507 "identify_ctrlr": false 00:11:40.507 } 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_set_max_subsystems", 00:11:40.507 "params": { 00:11:40.507 "max_subsystems": 1024 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_set_crdt", 00:11:40.507 "params": { 00:11:40.507 "crdt1": 0, 00:11:40.507 "crdt2": 0, 00:11:40.507 "crdt3": 0 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_create_transport", 00:11:40.507 "params": { 00:11:40.507 "trtype": "TCP", 00:11:40.507 "max_queue_depth": 128, 00:11:40.507 "max_io_qpairs_per_ctrlr": 127, 00:11:40.507 "in_capsule_data_size": 4096, 00:11:40.507 "max_io_size": 131072, 00:11:40.507 "io_unit_size": 131072, 00:11:40.507 "max_aq_depth": 128, 00:11:40.507 "num_shared_buffers": 511, 00:11:40.507 "buf_cache_size": 4294967295, 00:11:40.507 "dif_insert_or_strip": false, 00:11:40.507 "zcopy": false, 00:11:40.507 "c2h_success": false, 00:11:40.507 "sock_priority": 0, 00:11:40.507 "abort_timeout_sec": 1, 00:11:40.507 "ack_timeout": 0 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_create_subsystem", 00:11:40.507 "params": { 00:11:40.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.507 "allow_any_host": false, 00:11:40.507 "serial_number": "00000000000000000000", 00:11:40.507 "model_number": "SPDK bdev Controller", 00:11:40.507 "max_namespaces": 32, 00:11:40.507 "min_cntlid": 1, 00:11:40.507 "max_cntlid": 65519, 00:11:40.507 "ana_reporting": false 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": 
"nvmf_subsystem_add_host", 00:11:40.507 "params": { 00:11:40.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.507 "host": "nqn.2016-06.io.spdk:host1", 00:11:40.507 "psk": "key0" 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_subsystem_add_ns", 00:11:40.507 "params": { 00:11:40.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.507 "namespace": { 00:11:40.507 "nsid": 1, 00:11:40.507 "bdev_name": "malloc0", 00:11:40.507 "nguid": "592EED4B19D547EAB1B11316014AA113", 00:11:40.507 "uuid": "592eed4b-19d5-47ea-b1b1-1316014aa113", 00:11:40.507 "no_auto_visible": false 00:11:40.507 } 00:11:40.507 } 00:11:40.507 }, 00:11:40.507 { 00:11:40.507 "method": "nvmf_subsystem_add_listener", 00:11:40.507 "params": { 00:11:40.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.507 "listen_address": { 00:11:40.507 "trtype": "TCP", 00:11:40.507 "adrfam": "IPv4", 00:11:40.507 "traddr": "10.0.0.2", 00:11:40.507 "trsvcid": "4420" 00:11:40.507 }, 00:11:40.507 "secure_channel": true 00:11:40.507 } 00:11:40.507 } 00:11:40.507 ] 00:11:40.507 } 00:11:40.507 ] 00:11:40.507 }' 00:11:40.507 15:34:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:40.507 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 15:34:41 -- nvmf/common.sh@470 -- # nvmfpid=70944 00:11:40.507 15:34:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:11:40.507 15:34:41 -- nvmf/common.sh@471 -- # waitforlisten 70944 00:11:40.507 15:34:41 -- common/autotest_common.sh@817 -- # '[' -z 70944 ']' 00:11:40.507 15:34:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.507 15:34:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:40.507 15:34:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.507 15:34:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:40.507 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 [2024-04-17 15:34:41.710298] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:40.507 [2024-04-17 15:34:41.710387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.507 [2024-04-17 15:34:41.847421] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.766 [2024-04-17 15:34:41.957363] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.766 [2024-04-17 15:34:41.957425] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.766 [2024-04-17 15:34:41.957453] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.766 [2024-04-17 15:34:41.957462] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.766 [2024-04-17 15:34:41.957470] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.766 [2024-04-17 15:34:41.957592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.026 [2024-04-17 15:34:42.222482] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.026 [2024-04-17 15:34:42.254460] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:41.026 [2024-04-17 15:34:42.254694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.286 15:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:41.286 15:34:42 -- common/autotest_common.sh@850 -- # return 0 00:11:41.286 15:34:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:41.286 15:34:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.286 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 15:34:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.286 15:34:42 -- target/tls.sh@272 -- # bdevperf_pid=70976 00:11:41.286 15:34:42 -- target/tls.sh@273 -- # waitforlisten 70976 /var/tmp/bdevperf.sock 00:11:41.286 15:34:42 -- common/autotest_common.sh@817 -- # '[' -z 70976 ']' 00:11:41.286 15:34:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.286 15:34:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:41.286 15:34:42 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:11:41.286 15:34:42 -- target/tls.sh@270 -- # echo '{ 00:11:41.286 "subsystems": [ 00:11:41.286 { 00:11:41.286 "subsystem": "keyring", 00:11:41.286 "config": [ 00:11:41.286 { 00:11:41.286 "method": "keyring_file_add_key", 00:11:41.286 "params": { 00:11:41.286 "name": "key0", 00:11:41.286 "path": "/tmp/tmp.9RiozTkANn" 00:11:41.286 } 00:11:41.286 } 00:11:41.286 ] 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "subsystem": "iobuf", 00:11:41.286 "config": [ 00:11:41.286 { 00:11:41.286 "method": "iobuf_set_options", 00:11:41.286 "params": { 00:11:41.286 "small_pool_count": 8192, 00:11:41.286 "large_pool_count": 1024, 00:11:41.286 "small_bufsize": 8192, 00:11:41.286 "large_bufsize": 135168 00:11:41.286 } 00:11:41.286 } 00:11:41.286 ] 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "subsystem": "sock", 00:11:41.286 "config": [ 00:11:41.286 { 00:11:41.286 "method": "sock_impl_set_options", 00:11:41.286 "params": { 00:11:41.286 "impl_name": "uring", 00:11:41.286 "recv_buf_size": 2097152, 00:11:41.286 "send_buf_size": 2097152, 00:11:41.286 "enable_recv_pipe": true, 00:11:41.286 "enable_quickack": false, 00:11:41.286 "enable_placement_id": 0, 00:11:41.286 "enable_zerocopy_send_server": false, 00:11:41.286 "enable_zerocopy_send_client": false, 00:11:41.286 "zerocopy_threshold": 0, 00:11:41.286 "tls_version": 0, 00:11:41.286 "enable_ktls": false 00:11:41.286 } 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "method": "sock_impl_set_options", 00:11:41.286 "params": { 00:11:41.286 "impl_name": "posix", 00:11:41.286 "recv_buf_size": 2097152, 00:11:41.286 "send_buf_size": 2097152, 00:11:41.286 "enable_recv_pipe": true, 00:11:41.286 "enable_quickack": false, 00:11:41.286 "enable_placement_id": 0, 00:11:41.286 "enable_zerocopy_send_server": true, 00:11:41.286 "enable_zerocopy_send_client": false, 00:11:41.286 "zerocopy_threshold": 0, 00:11:41.286 "tls_version": 0, 00:11:41.286 "enable_ktls": false 00:11:41.286 } 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "method": "sock_impl_set_options", 
00:11:41.286 "params": { 00:11:41.286 "impl_name": "ssl", 00:11:41.286 "recv_buf_size": 4096, 00:11:41.286 "send_buf_size": 4096, 00:11:41.286 "enable_recv_pipe": true, 00:11:41.286 "enable_quickack": false, 00:11:41.286 "enable_placement_id": 0, 00:11:41.286 "enable_zerocopy_send_server": true, 00:11:41.286 "enable_zerocopy_send_client": false, 00:11:41.286 "zerocopy_threshold": 0, 00:11:41.286 "tls_version": 0, 00:11:41.286 "enable_ktls": false 00:11:41.286 } 00:11:41.286 } 00:11:41.286 ] 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "subsystem": "vmd", 00:11:41.286 "config": [] 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "subsystem": "accel", 00:11:41.286 "config": [ 00:11:41.286 { 00:11:41.286 "method": "accel_set_options", 00:11:41.286 "params": { 00:11:41.286 "small_cache_size": 128, 00:11:41.286 "large_cache_size": 16, 00:11:41.286 "task_count": 2048, 00:11:41.286 "sequence_count": 2048, 00:11:41.286 "buf_count": 2048 00:11:41.286 } 00:11:41.286 } 00:11:41.286 ] 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "subsystem": "bdev", 00:11:41.286 "config": [ 00:11:41.286 { 00:11:41.286 "method": "bdev_set_options", 00:11:41.286 "params": { 00:11:41.286 "bdev_io_pool_size": 65535, 00:11:41.286 "bdev_io_cache_size": 256, 00:11:41.286 "bdev_auto_examine": true, 00:11:41.286 "iobuf_small_cache_size": 128, 00:11:41.286 "iobuf_large_cache_size": 16 00:11:41.286 } 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "method": "bdev_raid_set_options", 00:11:41.286 "params": { 00:11:41.286 "process_window_size_kb": 1024 00:11:41.286 } 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "method": "bdev_iscsi_set_options", 00:11:41.286 "params": { 00:11:41.286 "timeout_sec": 30 00:11:41.286 } 00:11:41.286 }, 00:11:41.286 { 00:11:41.286 "method": "bdev_nvme_set_options", 00:11:41.286 "params": { 00:11:41.286 "action_on_timeout": "none", 00:11:41.286 "timeout_us": 0, 00:11:41.286 "timeout_admin_us": 0, 00:11:41.286 "keep_alive_timeout_ms": 10000, 00:11:41.286 "arbitration_burst": 0, 00:11:41.286 "low_priority_weight": 0, 00:11:41.286 "medium_priority_weight": 0, 00:11:41.286 "high_priority_weight": 0, 00:11:41.286 "nvme_adminq_poll_period_us": 10000, 00:11:41.286 "nvme_ioq_poll_period_us": 0, 00:11:41.286 "io_queue_requests": 512, 00:11:41.286 "delay_cmd_submit": true, 00:11:41.286 "transport_retry_count": 4, 00:11:41.286 "bdev_retry_count": 3, 00:11:41.286 "transport_ack_timeout": 0, 00:11:41.286 "ctrlr_loss_timeout_sec": 0, 00:11:41.286 "reconnect_delay_sec": 0, 00:11:41.286 "fast_io_fail_timeout_sec": 0, 00:11:41.286 "disable_auto_failback": false, 00:11:41.286 "generate_uuids": false, 00:11:41.286 "transport_tos": 0, 00:11:41.286 "nvme_error_stat": false, 00:11:41.287 "rdma_srq_size": 0, 00:11:41.287 "io_path_stat": false, 00:11:41.287 "allow_accel_sequence": false, 00:11:41.287 "rdma_max_cq_size": 0, 00:11:41.287 "rdma_cm_event_timeout_ms": 0, 00:11:41.287 "dhchap_digests": [ 00:11:41.287 "sha256", 00:11:41.287 "sha384", 00:11:41.287 "sha512" 00:11:41.287 ], 00:11:41.287 "dhchap_dhgroups": [ 00:11:41.287 "null", 00:11:41.287 "ffdhe2048", 00:11:41.287 "ffdhe3072", 00:11:41.287 "ffdhe4096", 00:11:41.287 "ffdhe6144", 00:11:41.287 "ffdhe8192" 00:11:41.287 ] 00:11:41.287 } 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "method": "bdev_nvme_attach_controller", 00:11:41.287 "params": { 00:11:41.287 "name": "nvme0", 00:11:41.287 "trtype": "TCP", 00:11:41.287 "adrfam": "IPv4", 00:11:41.287 "traddr": "10.0.0.2", 00:11:41.287 "trsvcid": "4420", 00:11:41.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.287 "prchk_reftag": 
false, 00:11:41.287 "prchk_guard": false, 00:11:41.287 "ctrlr_loss_timeout_sec": 0, 00:11:41.287 "reconnect_delay_sec": 0, 00:11:41.287 "fast_io_fail_timeout_sec": 0, 00:11:41.287 "psk": "key0", 00:11:41.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:41.287 "hdgst": false, 00:11:41.287 "ddgst": false 00:11:41.287 } 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "method": "bdev_nvme_set_hotplug", 00:11:41.287 "params": { 00:11:41.287 "period_us": 100000, 00:11:41.287 "enable": false 00:11:41.287 } 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "method": "bdev_enable_histogram", 00:11:41.287 "params": { 00:11:41.287 "name": "nvme0n1", 00:11:41.287 "enable": true 00:11:41.287 } 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "method": "bdev_wait_for_examine" 00:11:41.287 } 00:11:41.287 ] 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "subsystem": "nbd", 00:11:41.287 "config": [] 00:11:41.287 } 00:11:41.287 ] 00:11:41.287 }' 00:11:41.287 15:34:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.287 15:34:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:41.287 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 [2024-04-17 15:34:42.733244] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:41.546 [2024-04-17 15:34:42.734168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70976 ] 00:11:41.546 [2024-04-17 15:34:42.869973] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.804 [2024-04-17 15:34:43.025973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.804 [2024-04-17 15:34:43.230090] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:42.370 15:34:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:42.370 15:34:43 -- common/autotest_common.sh@850 -- # return 0 00:11:42.370 15:34:43 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:11:42.370 15:34:43 -- target/tls.sh@275 -- # jq -r '.[].name' 00:11:42.628 15:34:43 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.628 15:34:43 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:42.628 Running I/O for 1 seconds... 
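The initiator half mirrors this: bdevperf is started with -z so it idles until its own JSON config (echoed above onto /dev/fd/63) attaches nvme0 over the TLS listener, and the RPC socket is then used to confirm the controller name and kick off the workload. The same sequence with the config in a file instead of a process-substitution fd (bdevperf_tls.json stands in for the JSON shown above):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c bdevperf_tls.json &
    # the attach in the config should yield a controller named nvme0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # run the 1-second verify workload against it
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests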
00:11:44.003 00:11:44.003 Latency(us) 00:11:44.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.003 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.003 Verification LBA range: start 0x0 length 0x2000 00:11:44.003 nvme0n1 : 1.03 3994.99 15.61 0.00 0.00 31672.29 11439.01 23235.49 00:11:44.003 =================================================================================================================== 00:11:44.003 Total : 3994.99 15.61 0.00 0.00 31672.29 11439.01 23235.49 00:11:44.003 0 00:11:44.003 15:34:45 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:11:44.003 15:34:45 -- target/tls.sh@279 -- # cleanup 00:11:44.003 15:34:45 -- target/tls.sh@15 -- # process_shm --id 0 00:11:44.003 15:34:45 -- common/autotest_common.sh@794 -- # type=--id 00:11:44.003 15:34:45 -- common/autotest_common.sh@795 -- # id=0 00:11:44.003 15:34:45 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:44.003 15:34:45 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:44.003 15:34:45 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:44.003 15:34:45 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:44.003 15:34:45 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:44.003 15:34:45 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:44.003 nvmf_trace.0 00:11:44.003 15:34:45 -- common/autotest_common.sh@809 -- # return 0 00:11:44.003 15:34:45 -- target/tls.sh@16 -- # killprocess 70976 00:11:44.003 15:34:45 -- common/autotest_common.sh@936 -- # '[' -z 70976 ']' 00:11:44.003 15:34:45 -- common/autotest_common.sh@940 -- # kill -0 70976 00:11:44.003 15:34:45 -- common/autotest_common.sh@941 -- # uname 00:11:44.003 15:34:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.003 15:34:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70976 00:11:44.003 killing process with pid 70976 00:11:44.003 Received shutdown signal, test time was about 1.000000 seconds 00:11:44.003 00:11:44.003 Latency(us) 00:11:44.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.003 =================================================================================================================== 00:11:44.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.003 15:34:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:44.003 15:34:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:44.003 15:34:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70976' 00:11:44.003 15:34:45 -- common/autotest_common.sh@955 -- # kill 70976 00:11:44.003 15:34:45 -- common/autotest_common.sh@960 -- # wait 70976 00:11:44.261 15:34:45 -- target/tls.sh@17 -- # nvmftestfini 00:11:44.261 15:34:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:44.261 15:34:45 -- nvmf/common.sh@117 -- # sync 00:11:44.261 15:34:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.261 15:34:45 -- nvmf/common.sh@120 -- # set +e 00:11:44.261 15:34:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.261 15:34:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.261 rmmod nvme_tcp 00:11:44.261 rmmod nvme_fabrics 00:11:44.261 rmmod nvme_keyring 00:11:44.261 15:34:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.261 15:34:45 -- nvmf/common.sh@124 -- # set -e 00:11:44.261 15:34:45 
-- nvmf/common.sh@125 -- # return 0 00:11:44.261 15:34:45 -- nvmf/common.sh@478 -- # '[' -n 70944 ']' 00:11:44.261 15:34:45 -- nvmf/common.sh@479 -- # killprocess 70944 00:11:44.261 15:34:45 -- common/autotest_common.sh@936 -- # '[' -z 70944 ']' 00:11:44.261 15:34:45 -- common/autotest_common.sh@940 -- # kill -0 70944 00:11:44.261 15:34:45 -- common/autotest_common.sh@941 -- # uname 00:11:44.261 15:34:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.261 15:34:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70944 00:11:44.261 killing process with pid 70944 00:11:44.261 15:34:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.261 15:34:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.261 15:34:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70944' 00:11:44.261 15:34:45 -- common/autotest_common.sh@955 -- # kill 70944 00:11:44.261 15:34:45 -- common/autotest_common.sh@960 -- # wait 70944 00:11:44.828 15:34:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:44.828 15:34:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:44.828 15:34:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:44.828 15:34:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.828 15:34:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.828 15:34:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.828 15:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.828 15:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.828 15:34:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:44.828 15:34:46 -- target/tls.sh@18 -- # rm -f /tmp/tmp.ytVAlZ6ezA /tmp/tmp.6xo0MWX1dF /tmp/tmp.9RiozTkANn 00:11:44.828 00:11:44.828 real 1m28.095s 00:11:44.828 user 2m17.662s 00:11:44.828 sys 0m28.987s 00:11:44.828 15:34:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.828 ************************************ 00:11:44.828 END TEST nvmf_tls 00:11:44.828 ************************************ 00:11:44.828 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:11:44.828 15:34:46 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:44.828 15:34:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:44.828 15:34:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.828 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:11:44.828 ************************************ 00:11:44.828 START TEST nvmf_fips 00:11:44.828 ************************************ 00:11:44.828 15:34:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:45.086 * Looking for test storage... 
00:11:45.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:45.086 15:34:46 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.086 15:34:46 -- nvmf/common.sh@7 -- # uname -s 00:11:45.086 15:34:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.086 15:34:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.086 15:34:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.086 15:34:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.086 15:34:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.086 15:34:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.086 15:34:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.086 15:34:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.086 15:34:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.086 15:34:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.086 15:34:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:11:45.086 15:34:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:11:45.086 15:34:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.086 15:34:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.086 15:34:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.086 15:34:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.086 15:34:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.086 15:34:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.086 15:34:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.086 15:34:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.087 15:34:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.087 15:34:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.087 15:34:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.087 15:34:46 -- paths/export.sh@5 -- # export PATH 00:11:45.087 15:34:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.087 15:34:46 -- nvmf/common.sh@47 -- # : 0 00:11:45.087 15:34:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.087 15:34:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.087 15:34:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.087 15:34:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.087 15:34:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.087 15:34:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.087 15:34:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.087 15:34:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.087 15:34:46 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:45.087 15:34:46 -- fips/fips.sh@89 -- # check_openssl_version 00:11:45.087 15:34:46 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:45.087 15:34:46 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:45.087 15:34:46 -- fips/fips.sh@85 -- # openssl version 00:11:45.087 15:34:46 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:11:45.087 15:34:46 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:11:45.087 15:34:46 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:11:45.087 15:34:46 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:11:45.087 15:34:46 -- scripts/common.sh@333 -- # IFS=.-: 00:11:45.087 15:34:46 -- scripts/common.sh@333 -- # read -ra ver1 00:11:45.087 15:34:46 -- scripts/common.sh@334 -- # IFS=.-: 00:11:45.087 15:34:46 -- scripts/common.sh@334 -- # read -ra ver2 00:11:45.087 15:34:46 -- scripts/common.sh@335 -- # local 'op=>=' 00:11:45.087 15:34:46 -- scripts/common.sh@337 -- # ver1_l=3 00:11:45.087 15:34:46 -- scripts/common.sh@338 -- # ver2_l=3 00:11:45.087 15:34:46 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:11:45.087 15:34:46 -- scripts/common.sh@341 -- # case "$op" in 00:11:45.087 15:34:46 -- scripts/common.sh@345 -- # : 1 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v = 0 )) 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # decimal 3 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=3 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 3 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # ver1[v]=3 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # decimal 3 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=3 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 3 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # ver2[v]=3 00:11:45.087 15:34:46 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:11:45.087 15:34:46 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v++ )) 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # decimal 0 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=0 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 0 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # ver1[v]=0 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # decimal 0 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=0 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 0 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # ver2[v]=0 00:11:45.087 15:34:46 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:11:45.087 15:34:46 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v++ )) 00:11:45.087 15:34:46 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # decimal 9 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=9 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 9 00:11:45.087 15:34:46 -- scripts/common.sh@362 -- # ver1[v]=9 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # decimal 0 00:11:45.087 15:34:46 -- scripts/common.sh@350 -- # local d=0 00:11:45.087 15:34:46 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:45.087 15:34:46 -- scripts/common.sh@352 -- # echo 0 00:11:45.087 15:34:46 -- scripts/common.sh@363 -- # ver2[v]=0 00:11:45.087 15:34:46 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:11:45.087 15:34:46 -- scripts/common.sh@364 -- # return 0 00:11:45.087 15:34:46 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:45.087 15:34:46 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:11:45.087 15:34:46 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:45.087 15:34:46 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:45.087 15:34:46 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:45.087 15:34:46 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:45.087 15:34:46 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:45.087 15:34:46 -- fips/fips.sh@113 -- # build_openssl_config 00:11:45.087 15:34:46 -- fips/fips.sh@37 -- # cat 00:11:45.087 15:34:46 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:11:45.087 15:34:46 -- fips/fips.sh@58 -- # cat - 00:11:45.087 15:34:46 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:45.087 15:34:46 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:45.087 15:34:46 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:45.087 15:34:46 -- fips/fips.sh@116 -- # openssl list -providers 00:11:45.087 15:34:46 -- fips/fips.sh@116 -- # grep name 00:11:45.087 15:34:46 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:45.087 15:34:46 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:45.087 15:34:46 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:45.087 15:34:46 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:45.087 15:34:46 -- fips/fips.sh@127 -- # : 00:11:45.087 15:34:46 -- common/autotest_common.sh@638 -- # local es=0 00:11:45.087 15:34:46 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:45.087 15:34:46 -- common/autotest_common.sh@626 -- # local arg=openssl 00:11:45.087 15:34:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.087 15:34:46 -- common/autotest_common.sh@630 -- # type -t openssl 00:11:45.087 15:34:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.087 15:34:46 -- common/autotest_common.sh@632 -- # type -P openssl 00:11:45.087 15:34:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.087 15:34:46 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:11:45.087 15:34:46 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:11:45.087 15:34:46 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:11:45.346 Error setting digest 00:11:45.346 00F242793F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:11:45.346 00F242793F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:11:45.346 15:34:46 -- common/autotest_common.sh@641 -- # es=1 00:11:45.346 15:34:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:45.346 15:34:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:45.346 15:34:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:45.346 15:34:46 -- fips/fips.sh@130 -- # nvmftestinit 00:11:45.346 15:34:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:45.346 15:34:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.346 15:34:46 -- nvmf/common.sh@437 -- # prepare_net_devs 
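The pass condition in the openssl check above is inverted on purpose: with the fips provider loaded, MD5 is not an approved algorithm, so the "Error setting digest" lines are the expected outcome and es=1 is what lets the test proceed. The same gate can be reproduced by hand; a rough sketch, assuming spdk_fips.conf is the config that build_openssl_config generated above:

    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name      # expect both a base and a fips provider
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 accepted - FIPS mode is not active"
    else
        echo "MD5 rejected - the fips provider is enforcing the approved-algorithm list"
    fi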
00:11:45.346 15:34:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:45.346 15:34:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:45.346 15:34:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.346 15:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.346 15:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.346 15:34:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:45.346 15:34:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:45.346 15:34:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:45.346 15:34:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:45.346 15:34:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:45.346 15:34:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:45.346 15:34:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.346 15:34:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.346 15:34:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:45.346 15:34:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:45.346 15:34:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.346 15:34:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.346 15:34:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.346 15:34:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.346 15:34:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.346 15:34:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.346 15:34:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.346 15:34:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.346 15:34:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:45.346 15:34:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:45.346 Cannot find device "nvmf_tgt_br" 00:11:45.346 15:34:46 -- nvmf/common.sh@155 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.346 Cannot find device "nvmf_tgt_br2" 00:11:45.346 15:34:46 -- nvmf/common.sh@156 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:45.346 15:34:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:45.346 Cannot find device "nvmf_tgt_br" 00:11:45.346 15:34:46 -- nvmf/common.sh@158 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:45.346 Cannot find device "nvmf_tgt_br2" 00:11:45.346 15:34:46 -- nvmf/common.sh@159 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:45.346 15:34:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:45.346 15:34:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.346 15:34:46 -- nvmf/common.sh@162 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.346 15:34:46 -- nvmf/common.sh@163 -- # true 00:11:45.346 15:34:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.346 15:34:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.346 15:34:46 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.346 15:34:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.346 15:34:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.346 15:34:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.346 15:34:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.346 15:34:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.346 15:34:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.346 15:34:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:45.346 15:34:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:45.346 15:34:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:45.604 15:34:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:45.604 15:34:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.604 15:34:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.604 15:34:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.604 15:34:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:45.604 15:34:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:45.604 15:34:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.604 15:34:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.604 15:34:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.604 15:34:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.604 15:34:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.604 15:34:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:45.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:11:45.604 00:11:45.604 --- 10.0.0.2 ping statistics --- 00:11:45.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.604 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:45.604 15:34:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:45.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:45.604 00:11:45.604 --- 10.0.0.3 ping statistics --- 00:11:45.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.604 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:45.604 15:34:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:45.604 00:11:45.604 --- 10.0.0.1 ping statistics --- 00:11:45.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.604 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:45.604 15:34:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.604 15:34:46 -- nvmf/common.sh@422 -- # return 0 00:11:45.604 15:34:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:45.604 15:34:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.604 15:34:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:45.604 15:34:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:45.604 15:34:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.604 15:34:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:45.604 15:34:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:45.604 15:34:46 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:45.604 15:34:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:45.604 15:34:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.604 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.604 15:34:46 -- nvmf/common.sh@470 -- # nvmfpid=71248 00:11:45.604 15:34:46 -- nvmf/common.sh@471 -- # waitforlisten 71248 00:11:45.604 15:34:46 -- common/autotest_common.sh@817 -- # '[' -z 71248 ']' 00:11:45.604 15:34:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:45.604 15:34:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.604 15:34:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.604 15:34:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.604 15:34:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.604 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.604 [2024-04-17 15:34:46.998678] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:45.604 [2024-04-17 15:34:46.999467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.863 [2024-04-17 15:34:47.141998] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.121 [2024-04-17 15:34:47.304052] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.121 [2024-04-17 15:34:47.304132] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.121 [2024-04-17 15:34:47.304179] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.121 [2024-04-17 15:34:47.304190] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.121 [2024-04-17 15:34:47.304200] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
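nvmf_veth_init above rebuilds the virtual test network from scratch each time, so the "Cannot find device" and "Cannot open network namespace" lines are just the teardown of a topology that does not exist yet. Stripped of that error handling, the topology it leaves behind looks roughly like this (only the first target interface is shown; nvmf_tgt_if2 with 10.0.0.3 is wired up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # initiator side reaching the target namespace

The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are the sanity check that both directions of this bridge work before the NVMe/TCP traffic starts.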
00:11:46.121 [2024-04-17 15:34:47.304235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.687 15:34:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.687 15:34:47 -- common/autotest_common.sh@850 -- # return 0 00:11:46.687 15:34:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:46.687 15:34:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:46.687 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:11:46.687 15:34:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.687 15:34:47 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:46.687 15:34:47 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:46.687 15:34:47 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:46.687 15:34:47 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:46.687 15:34:47 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:46.687 15:34:47 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:46.687 15:34:47 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:46.687 15:34:47 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.946 [2024-04-17 15:34:48.172194] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.946 [2024-04-17 15:34:48.188137] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:46.946 [2024-04-17 15:34:48.188377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.946 [2024-04-17 15:34:48.223384] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:46.946 malloc0 00:11:46.946 15:34:48 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:46.946 15:34:48 -- fips/fips.sh@147 -- # bdevperf_pid=71282 00:11:46.946 15:34:48 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:46.946 15:34:48 -- fips/fips.sh@148 -- # waitforlisten 71282 /var/tmp/bdevperf.sock 00:11:46.946 15:34:48 -- common/autotest_common.sh@817 -- # '[' -z 71282 ']' 00:11:46.946 15:34:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.946 15:34:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:46.946 15:34:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.946 15:34:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:46.946 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:11:46.946 [2024-04-17 15:34:48.334182] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
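The key handed to the target above is a TLS PSK in the NVMe-oF interchange format (NVMeTLSkey-1:01:...:), a format version and hash identifier followed by the base64 secret. fips.sh writes it to key.txt with mode 0600 and registers it for host1 through setup_nvmf_tgt_conf; a rough hand-driven equivalent is sketched below (the rpc.py flag spelling is an assumption here, since the script issues the RPCs itself), after which the bdevperf attach that follows presents the same file with --psk:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt && chmod 0600 key.txt   # same key and mode fips.sh uses above
    # register the PSK for this host on the target (flag spelling assumed;
    # setup_nvmf_tgt_conf drives the equivalent nvmf_subsystem_add_host RPC)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key.txt

Passing the PSK as a file path is what triggers the "PSK path ... deprecated feature ... to be removed in v24.09" warnings seen in this run.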
00:11:46.946 [2024-04-17 15:34:48.334304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71282 ] 00:11:47.204 [2024-04-17 15:34:48.477276] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.204 [2024-04-17 15:34:48.637017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.140 15:34:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:48.140 15:34:49 -- common/autotest_common.sh@850 -- # return 0 00:11:48.140 15:34:49 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:48.140 [2024-04-17 15:34:49.504039] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:48.140 [2024-04-17 15:34:49.504185] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:48.140 TLSTESTn1 00:11:48.399 15:34:49 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:48.399 Running I/O for 10 seconds... 00:11:58.368 00:11:58.368 Latency(us) 00:11:58.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.368 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:58.368 Verification LBA range: start 0x0 length 0x2000 00:11:58.368 TLSTESTn1 : 10.02 3951.49 15.44 0.00 0.00 32328.77 7626.01 34317.03 00:11:58.368 =================================================================================================================== 00:11:58.368 Total : 3951.49 15.44 0.00 0.00 32328.77 7626.01 34317.03 00:11:58.368 0 00:11:58.368 15:34:59 -- fips/fips.sh@1 -- # cleanup 00:11:58.368 15:34:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:58.368 15:34:59 -- common/autotest_common.sh@794 -- # type=--id 00:11:58.368 15:34:59 -- common/autotest_common.sh@795 -- # id=0 00:11:58.368 15:34:59 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:58.368 15:34:59 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:58.368 15:34:59 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:58.368 15:34:59 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:58.368 15:34:59 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:58.368 15:34:59 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:58.368 nvmf_trace.0 00:11:58.626 15:34:59 -- common/autotest_common.sh@809 -- # return 0 00:11:58.626 15:34:59 -- fips/fips.sh@16 -- # killprocess 71282 00:11:58.626 15:34:59 -- common/autotest_common.sh@936 -- # '[' -z 71282 ']' 00:11:58.626 15:34:59 -- common/autotest_common.sh@940 -- # kill -0 71282 00:11:58.626 15:34:59 -- common/autotest_common.sh@941 -- # uname 00:11:58.626 15:34:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.626 15:34:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71282 00:11:58.626 killing process with pid 71282 00:11:58.626 Received shutdown signal, test time was 
about 10.000000 seconds 00:11:58.626 00:11:58.626 Latency(us) 00:11:58.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.626 =================================================================================================================== 00:11:58.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:58.626 15:34:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:58.626 15:34:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:58.626 15:34:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71282' 00:11:58.626 15:34:59 -- common/autotest_common.sh@955 -- # kill 71282 00:11:58.626 [2024-04-17 15:34:59.864678] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:58.626 15:34:59 -- common/autotest_common.sh@960 -- # wait 71282 00:11:58.885 15:35:00 -- fips/fips.sh@17 -- # nvmftestfini 00:11:58.885 15:35:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:58.885 15:35:00 -- nvmf/common.sh@117 -- # sync 00:11:58.885 15:35:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.885 15:35:00 -- nvmf/common.sh@120 -- # set +e 00:11:58.885 15:35:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.885 15:35:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.885 rmmod nvme_tcp 00:11:58.885 rmmod nvme_fabrics 00:11:58.885 rmmod nvme_keyring 00:11:58.885 15:35:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.885 15:35:00 -- nvmf/common.sh@124 -- # set -e 00:11:58.885 15:35:00 -- nvmf/common.sh@125 -- # return 0 00:11:58.885 15:35:00 -- nvmf/common.sh@478 -- # '[' -n 71248 ']' 00:11:58.885 15:35:00 -- nvmf/common.sh@479 -- # killprocess 71248 00:11:58.885 15:35:00 -- common/autotest_common.sh@936 -- # '[' -z 71248 ']' 00:11:58.885 15:35:00 -- common/autotest_common.sh@940 -- # kill -0 71248 00:11:58.885 15:35:00 -- common/autotest_common.sh@941 -- # uname 00:11:58.885 15:35:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.885 15:35:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71248 00:11:58.885 killing process with pid 71248 00:11:58.885 15:35:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:58.885 15:35:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:58.885 15:35:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71248' 00:11:58.885 15:35:00 -- common/autotest_common.sh@955 -- # kill 71248 00:11:58.885 [2024-04-17 15:35:00.302136] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:58.885 15:35:00 -- common/autotest_common.sh@960 -- # wait 71248 00:11:59.451 15:35:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:59.451 15:35:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:59.451 15:35:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:59.451 15:35:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.451 15:35:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.451 15:35:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.451 15:35:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.451 15:35:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.451 15:35:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:59.451 15:35:00 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:59.451 ************************************ 00:11:59.451 END TEST nvmf_fips 00:11:59.451 ************************************ 00:11:59.451 00:11:59.451 real 0m14.443s 00:11:59.451 user 0m19.647s 00:11:59.451 sys 0m5.796s 00:11:59.451 15:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.451 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:11:59.451 15:35:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:59.451 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:11:59.451 15:35:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.451 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:11:59.451 15:35:00 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:11:59.451 15:35:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.451 15:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.451 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.451 ************************************ 00:11:59.451 START TEST nvmf_identify 00:11:59.451 ************************************ 00:11:59.451 15:35:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:11:59.710 * Looking for test storage... 00:11:59.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:11:59.710 15:35:00 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.710 15:35:00 -- nvmf/common.sh@7 -- # uname -s 00:11:59.710 15:35:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.710 15:35:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.710 15:35:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.710 15:35:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.710 15:35:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.710 15:35:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.710 15:35:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.710 15:35:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.710 15:35:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.710 15:35:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:11:59.710 15:35:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:11:59.710 15:35:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.710 15:35:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.710 15:35:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.710 15:35:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.710 15:35:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.710 15:35:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.710 15:35:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.710 15:35:00 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.710 15:35:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.710 15:35:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.710 15:35:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.710 15:35:00 -- paths/export.sh@5 -- # export PATH 00:11:59.710 15:35:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.710 15:35:00 -- nvmf/common.sh@47 -- # : 0 00:11:59.710 15:35:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.710 15:35:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.710 15:35:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.710 15:35:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.710 15:35:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.710 15:35:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.710 15:35:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.710 15:35:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.710 15:35:00 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.710 15:35:00 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.710 15:35:00 -- host/identify.sh@14 -- # nvmftestinit 00:11:59.710 15:35:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:59.710 15:35:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.710 15:35:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:59.710 15:35:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:59.710 15:35:00 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:59.710 15:35:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.710 15:35:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.710 15:35:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.710 15:35:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:59.710 15:35:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:59.710 15:35:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.710 15:35:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.710 15:35:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:59.710 15:35:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:59.710 15:35:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.710 15:35:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.710 15:35:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.710 15:35:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.710 15:35:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.710 15:35:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.710 15:35:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.710 15:35:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.710 15:35:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:59.710 15:35:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:59.710 Cannot find device "nvmf_tgt_br" 00:11:59.710 15:35:01 -- nvmf/common.sh@155 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.710 Cannot find device "nvmf_tgt_br2" 00:11:59.710 15:35:01 -- nvmf/common.sh@156 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:59.710 15:35:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:59.710 Cannot find device "nvmf_tgt_br" 00:11:59.710 15:35:01 -- nvmf/common.sh@158 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:59.710 Cannot find device "nvmf_tgt_br2" 00:11:59.710 15:35:01 -- nvmf/common.sh@159 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:59.710 15:35:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:59.710 15:35:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.710 15:35:01 -- nvmf/common.sh@162 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.710 15:35:01 -- nvmf/common.sh@163 -- # true 00:11:59.710 15:35:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.710 15:35:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.710 15:35:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.710 15:35:01 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.969 15:35:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.969 15:35:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.969 15:35:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.969 15:35:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:59.969 15:35:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:59.969 15:35:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:59.969 15:35:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:59.969 15:35:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:59.969 15:35:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:59.969 15:35:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.969 15:35:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.969 15:35:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.969 15:35:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:59.969 15:35:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:59.969 15:35:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.969 15:35:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.969 15:35:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.969 15:35:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.969 15:35:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.969 15:35:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:59.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:59.969 00:11:59.969 --- 10.0.0.2 ping statistics --- 00:11:59.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.969 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:59.969 15:35:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:59.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:59.969 00:11:59.969 --- 10.0.0.3 ping statistics --- 00:11:59.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.969 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:59.969 15:35:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:59.969 00:11:59.969 --- 10.0.0.1 ping statistics --- 00:11:59.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.969 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:59.969 15:35:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.969 15:35:01 -- nvmf/common.sh@422 -- # return 0 00:11:59.969 15:35:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:59.969 15:35:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.969 15:35:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:59.969 15:35:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:59.969 15:35:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.969 15:35:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:59.969 15:35:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:59.969 15:35:01 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:11:59.969 15:35:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.969 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:59.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.969 15:35:01 -- host/identify.sh@19 -- # nvmfpid=71636 00:11:59.969 15:35:01 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.969 15:35:01 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:59.969 15:35:01 -- host/identify.sh@23 -- # waitforlisten 71636 00:11:59.969 15:35:01 -- common/autotest_common.sh@817 -- # '[' -z 71636 ']' 00:11:59.969 15:35:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.969 15:35:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.969 15:35:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.969 15:35:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.969 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:59.969 [2024-04-17 15:35:01.401450] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:11:59.969 [2024-04-17 15:35:01.401817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.311 [2024-04-17 15:35:01.543531] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.311 [2024-04-17 15:35:01.683158] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.311 [2024-04-17 15:35:01.683482] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.311 [2024-04-17 15:35:01.683651] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.311 [2024-04-17 15:35:01.683719] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.311 [2024-04-17 15:35:01.683870] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
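For reference, the network topology that the nvmf_veth_init steps above set up can be reproduced by hand with the condensed sketch below. This is a sketch, not the exact script: the preliminary cleanup of any pre-existing interfaces is omitted, and all namespace, interface, and address names are simply the ones printed in the log above (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, 10.0.0.1-3), not new assumptions.

    # Target-side interfaces live in a separate network namespace; the
    # initiator side stays in the root namespace. A bridge joins the peers.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addresses as used by the tests: initiator 10.0.0.1, target 10.0.0.2/10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring everything up, including loopback inside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the initiator-side and target-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity checks in both directions, as done by the test
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place the test launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt, as shown below), so the target listens on 10.0.0.2:4420 while the initiator-side tools run in the root namespace.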
00:12:00.311 [2024-04-17 15:35:01.684017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.311 [2024-04-17 15:35:01.685494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.311 [2024-04-17 15:35:01.685668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.311 [2024-04-17 15:35:01.685679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.246 15:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:01.246 15:35:02 -- common/autotest_common.sh@850 -- # return 0 00:12:01.246 15:35:02 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 [2024-04-17 15:35:02.407702] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:12:01.246 15:35:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 15:35:02 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 Malloc0 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 [2024-04-17 15:35:02.520605] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:12:01.246 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.246 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:12:01.246 [2024-04-17 15:35:02.536359] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:01.246 [ 
00:12:01.246 { 00:12:01.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.246 "subtype": "Discovery", 00:12:01.246 "listen_addresses": [ 00:12:01.246 { 00:12:01.246 "transport": "TCP", 00:12:01.246 "trtype": "TCP", 00:12:01.246 "adrfam": "IPv4", 00:12:01.246 "traddr": "10.0.0.2", 00:12:01.246 "trsvcid": "4420" 00:12:01.246 } 00:12:01.246 ], 00:12:01.246 "allow_any_host": true, 00:12:01.246 "hosts": [] 00:12:01.246 }, 00:12:01.246 { 00:12:01.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.246 "subtype": "NVMe", 00:12:01.246 "listen_addresses": [ 00:12:01.246 { 00:12:01.246 "transport": "TCP", 00:12:01.246 "trtype": "TCP", 00:12:01.246 "adrfam": "IPv4", 00:12:01.246 "traddr": "10.0.0.2", 00:12:01.246 "trsvcid": "4420" 00:12:01.246 } 00:12:01.246 ], 00:12:01.246 "allow_any_host": true, 00:12:01.246 "hosts": [], 00:12:01.246 "serial_number": "SPDK00000000000001", 00:12:01.246 "model_number": "SPDK bdev Controller", 00:12:01.246 "max_namespaces": 32, 00:12:01.246 "min_cntlid": 1, 00:12:01.246 "max_cntlid": 65519, 00:12:01.246 "namespaces": [ 00:12:01.246 { 00:12:01.246 "nsid": 1, 00:12:01.246 "bdev_name": "Malloc0", 00:12:01.246 "name": "Malloc0", 00:12:01.246 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:12:01.246 "eui64": "ABCDEF0123456789", 00:12:01.246 "uuid": "352db8a6-2b44-4998-b658-90d987ef521d" 00:12:01.246 } 00:12:01.246 ] 00:12:01.246 } 00:12:01.246 ] 00:12:01.246 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.246 15:35:02 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:12:01.246 [2024-04-17 15:35:02.573457] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:12:01.246 [2024-04-17 15:35:02.573659] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71671 ] 00:12:01.512 [2024-04-17 15:35:02.718160] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:12:01.512 [2024-04-17 15:35:02.718235] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:01.512 [2024-04-17 15:35:02.718243] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:01.512 [2024-04-17 15:35:02.718256] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:01.512 [2024-04-17 15:35:02.718273] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:01.512 [2024-04-17 15:35:02.718434] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:12:01.512 [2024-04-17 15:35:02.718489] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x880300 0 00:12:01.512 [2024-04-17 15:35:02.724800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:01.512 [2024-04-17 15:35:02.724827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:01.512 [2024-04-17 15:35:02.724834] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:01.512 [2024-04-17 15:35:02.724838] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:01.512 [2024-04-17 15:35:02.724887] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.724894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.724899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.512 [2024-04-17 15:35:02.724915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:01.512 [2024-04-17 15:35:02.724947] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.512 [2024-04-17 15:35:02.732769] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.512 [2024-04-17 15:35:02.732792] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.512 [2024-04-17 15:35:02.732814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.732819] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.512 [2024-04-17 15:35:02.732836] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:01.512 [2024-04-17 15:35:02.732845] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:12:01.512 [2024-04-17 15:35:02.732852] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:12:01.512 [2024-04-17 15:35:02.732872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.732878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.732882] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.512 [2024-04-17 15:35:02.732892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.512 [2024-04-17 15:35:02.732919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.512 [2024-04-17 15:35:02.732985] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.512 [2024-04-17 15:35:02.732992] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.512 [2024-04-17 15:35:02.732996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733000] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.512 [2024-04-17 15:35:02.733011] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:12:01.512 [2024-04-17 15:35:02.733020] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:12:01.512 [2024-04-17 15:35:02.733028] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733032] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733036] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.512 [2024-04-17 15:35:02.733044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.512 [2024-04-17 15:35:02.733063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.512 [2024-04-17 15:35:02.733112] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.512 [2024-04-17 15:35:02.733119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.512 [2024-04-17 15:35:02.733123] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.512 [2024-04-17 15:35:02.733133] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:12:01.512 [2024-04-17 15:35:02.733142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.512 [2024-04-17 15:35:02.733150] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733154] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.512 [2024-04-17 15:35:02.733165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.512 [2024-04-17 15:35:02.733183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.512 [2024-04-17 15:35:02.733234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.512 [2024-04-17 15:35:02.733240] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:12:01.512 [2024-04-17 15:35:02.733244] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733248] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.512 [2024-04-17 15:35:02.733254] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.512 [2024-04-17 15:35:02.733264] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.512 [2024-04-17 15:35:02.733280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.512 [2024-04-17 15:35:02.733297] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.512 [2024-04-17 15:35:02.733345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.512 [2024-04-17 15:35:02.733352] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.512 [2024-04-17 15:35:02.733356] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.512 [2024-04-17 15:35:02.733360] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.512 [2024-04-17 15:35:02.733366] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:12:01.512 [2024-04-17 15:35:02.733371] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:12:01.513 [2024-04-17 15:35:02.733379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.513 [2024-04-17 15:35:02.733485] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:12:01.513 [2024-04-17 15:35:02.733491] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.513 [2024-04-17 15:35:02.733501] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733509] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.733517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.513 [2024-04-17 15:35:02.733534] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.513 [2024-04-17 15:35:02.733583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.513 [2024-04-17 15:35:02.733590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.513 [2024-04-17 15:35:02.733593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733597] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.513 [2024-04-17 15:35:02.733603] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.513 [2024-04-17 15:35:02.733613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.733629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.513 [2024-04-17 15:35:02.733646] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.513 [2024-04-17 15:35:02.733697] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.513 [2024-04-17 15:35:02.733704] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.513 [2024-04-17 15:35:02.733707] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733711] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.513 [2024-04-17 15:35:02.733717] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.513 [2024-04-17 15:35:02.733722] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.733730] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:12:01.513 [2024-04-17 15:35:02.733741] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.733752] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733756] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.733764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.513 [2024-04-17 15:35:02.733796] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.513 [2024-04-17 15:35:02.733902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.513 [2024-04-17 15:35:02.733910] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.513 [2024-04-17 15:35:02.733914] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733918] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x880300): datao=0, datal=4096, cccid=0 00:12:01.513 [2024-04-17 15:35:02.733923] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c89c0) on tqpair(0x880300): expected_datao=0, payload_size=4096 00:12:01.513 [2024-04-17 15:35:02.733929] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733938] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733942] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733951] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.513 [2024-04-17 15:35:02.733957] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.513 [2024-04-17 15:35:02.733962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.733966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.513 [2024-04-17 15:35:02.733976] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:12:01.513 [2024-04-17 15:35:02.733981] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:12:01.513 [2024-04-17 15:35:02.733986] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:12:01.513 [2024-04-17 15:35:02.733996] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:12:01.513 [2024-04-17 15:35:02.734003] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:12:01.513 [2024-04-17 15:35:02.734008] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.734018] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.734026] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734031] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734035] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:01.513 [2024-04-17 15:35:02.734063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.513 [2024-04-17 15:35:02.734121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.513 [2024-04-17 15:35:02.734128] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.513 [2024-04-17 15:35:02.734132] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734136] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c89c0) on tqpair=0x880300 00:12:01.513 [2024-04-17 15:35:02.734145] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.513 [2024-04-17 15:35:02.734167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734171] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.513 [2024-04-17 15:35:02.734187] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734191] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734195] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.513 [2024-04-17 15:35:02.734207] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734215] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.513 [2024-04-17 15:35:02.734226] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.734242] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.513 [2024-04-17 15:35:02.734250] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734254] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.513 [2024-04-17 15:35:02.734282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c89c0, cid 0, qid 0 00:12:01.513 [2024-04-17 15:35:02.734289] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8b20, cid 1, qid 0 00:12:01.513 [2024-04-17 15:35:02.734294] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8c80, cid 2, qid 0 00:12:01.513 [2024-04-17 15:35:02.734299] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.513 [2024-04-17 15:35:02.734304] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8f40, cid 4, qid 0 00:12:01.513 [2024-04-17 15:35:02.734397] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.513 [2024-04-17 15:35:02.734404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.513 [2024-04-17 15:35:02.734408] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734412] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8f40) on tqpair=0x880300 00:12:01.513 [2024-04-17 15:35:02.734418] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:12:01.513 [2024-04-17 15:35:02.734424] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:12:01.513 [2024-04-17 15:35:02.734436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.513 [2024-04-17 15:35:02.734441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x880300) 00:12:01.513 [2024-04-17 15:35:02.734448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.514 [2024-04-17 15:35:02.734466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8f40, cid 4, qid 0 00:12:01.514 [2024-04-17 15:35:02.734528] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.514 [2024-04-17 15:35:02.734535] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.514 [2024-04-17 15:35:02.734539] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734555] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x880300): datao=0, datal=4096, cccid=4 00:12:01.514 [2024-04-17 15:35:02.734561] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c8f40) on tqpair(0x880300): expected_datao=0, payload_size=4096 00:12:01.514 [2024-04-17 15:35:02.734565] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734573] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734577] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.514 [2024-04-17 15:35:02.734592] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.514 [2024-04-17 15:35:02.734596] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734600] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8f40) on tqpair=0x880300 00:12:01.514 [2024-04-17 15:35:02.734614] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:12:01.514 [2024-04-17 15:35:02.734637] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734642] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x880300) 00:12:01.514 [2024-04-17 15:35:02.734650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.514 [2024-04-17 15:35:02.734657] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734661] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734665] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x880300) 00:12:01.514 [2024-04-17 15:35:02.734671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.514 [2024-04-17 15:35:02.734698] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8f40, cid 4, qid 0 00:12:01.514 [2024-04-17 15:35:02.734706] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c90a0, cid 5, qid 0 00:12:01.514 [2024-04-17 15:35:02.734835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.514 [2024-04-17 15:35:02.734844] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.514 [2024-04-17 15:35:02.734848] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734852] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x880300): datao=0, datal=1024, cccid=4 00:12:01.514 [2024-04-17 15:35:02.734856] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c8f40) on tqpair(0x880300): expected_datao=0, payload_size=1024 00:12:01.514 [2024-04-17 15:35:02.734861] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734868] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734872] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734878] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.514 [2024-04-17 15:35:02.734884] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.514 [2024-04-17 15:35:02.734888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c90a0) on tqpair=0x880300 00:12:01.514 [2024-04-17 15:35:02.734910] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.514 [2024-04-17 15:35:02.734919] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.514 [2024-04-17 15:35:02.734922] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734926] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8f40) on tqpair=0x880300 00:12:01.514 [2024-04-17 15:35:02.734944] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.734950] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x880300) 00:12:01.514 [2024-04-17 15:35:02.734957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.514 [2024-04-17 15:35:02.734982] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8f40, cid 4, qid 0 00:12:01.514 [2024-04-17 15:35:02.735055] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.514 [2024-04-17 15:35:02.735062] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.514 [2024-04-17 15:35:02.735066] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735070] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x880300): datao=0, datal=3072, cccid=4 00:12:01.514 [2024-04-17 15:35:02.735075] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c8f40) on tqpair(0x880300): expected_datao=0, payload_size=3072 00:12:01.514 [2024-04-17 15:35:02.735079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735086] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735090] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 
15:35:02.735099] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.514 [2024-04-17 15:35:02.735105] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.514 [2024-04-17 15:35:02.735109] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735113] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8f40) on tqpair=0x880300 00:12:01.514 [2024-04-17 15:35:02.735123] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735128] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x880300) 00:12:01.514 [2024-04-17 15:35:02.735135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.514 [2024-04-17 15:35:02.735158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8f40, cid 4, qid 0 00:12:01.514 [2024-04-17 15:35:02.735223] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.514 [2024-04-17 15:35:02.735230] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.514 [2024-04-17 15:35:02.735234] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735238] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x880300): datao=0, datal=8, cccid=4 00:12:01.514 [2024-04-17 15:35:02.735243] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c8f40) on tqpair(0x880300): expected_datao=0, payload_size=8 00:12:01.514 [2024-04-17 15:35:02.735248] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735255] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735259] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.514 [2024-04-17 15:35:02.735282] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.514 [2024-04-17 15:35:02.735286] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.514 [2024-04-17 15:35:02.735290] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8f40) on tqpair=0x880300 00:12:01.514 ===================================================== 00:12:01.514 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:12:01.514 ===================================================== 00:12:01.514 Controller Capabilities/Features 00:12:01.514 ================================ 00:12:01.514 Vendor ID: 0000 00:12:01.514 Subsystem Vendor ID: 0000 00:12:01.514 Serial Number: .................... 00:12:01.514 Model Number: ........................................ 
00:12:01.514 Firmware Version: 24.05 00:12:01.514 Recommended Arb Burst: 0 00:12:01.514 IEEE OUI Identifier: 00 00 00 00:12:01.514 Multi-path I/O 00:12:01.514 May have multiple subsystem ports: No 00:12:01.514 May have multiple controllers: No 00:12:01.514 Associated with SR-IOV VF: No 00:12:01.514 Max Data Transfer Size: 131072 00:12:01.514 Max Number of Namespaces: 0 00:12:01.514 Max Number of I/O Queues: 1024 00:12:01.514 NVMe Specification Version (VS): 1.3 00:12:01.514 NVMe Specification Version (Identify): 1.3 00:12:01.514 Maximum Queue Entries: 128 00:12:01.514 Contiguous Queues Required: Yes 00:12:01.514 Arbitration Mechanisms Supported 00:12:01.514 Weighted Round Robin: Not Supported 00:12:01.514 Vendor Specific: Not Supported 00:12:01.514 Reset Timeout: 15000 ms 00:12:01.514 Doorbell Stride: 4 bytes 00:12:01.514 NVM Subsystem Reset: Not Supported 00:12:01.514 Command Sets Supported 00:12:01.514 NVM Command Set: Supported 00:12:01.514 Boot Partition: Not Supported 00:12:01.514 Memory Page Size Minimum: 4096 bytes 00:12:01.514 Memory Page Size Maximum: 4096 bytes 00:12:01.514 Persistent Memory Region: Not Supported 00:12:01.515 Optional Asynchronous Events Supported 00:12:01.515 Namespace Attribute Notices: Not Supported 00:12:01.515 Firmware Activation Notices: Not Supported 00:12:01.515 ANA Change Notices: Not Supported 00:12:01.515 PLE Aggregate Log Change Notices: Not Supported 00:12:01.515 LBA Status Info Alert Notices: Not Supported 00:12:01.515 EGE Aggregate Log Change Notices: Not Supported 00:12:01.515 Normal NVM Subsystem Shutdown event: Not Supported 00:12:01.515 Zone Descriptor Change Notices: Not Supported 00:12:01.515 Discovery Log Change Notices: Supported 00:12:01.515 Controller Attributes 00:12:01.515 128-bit Host Identifier: Not Supported 00:12:01.515 Non-Operational Permissive Mode: Not Supported 00:12:01.515 NVM Sets: Not Supported 00:12:01.515 Read Recovery Levels: Not Supported 00:12:01.515 Endurance Groups: Not Supported 00:12:01.515 Predictable Latency Mode: Not Supported 00:12:01.515 Traffic Based Keep ALive: Not Supported 00:12:01.515 Namespace Granularity: Not Supported 00:12:01.515 SQ Associations: Not Supported 00:12:01.515 UUID List: Not Supported 00:12:01.515 Multi-Domain Subsystem: Not Supported 00:12:01.515 Fixed Capacity Management: Not Supported 00:12:01.515 Variable Capacity Management: Not Supported 00:12:01.515 Delete Endurance Group: Not Supported 00:12:01.515 Delete NVM Set: Not Supported 00:12:01.515 Extended LBA Formats Supported: Not Supported 00:12:01.515 Flexible Data Placement Supported: Not Supported 00:12:01.515 00:12:01.515 Controller Memory Buffer Support 00:12:01.515 ================================ 00:12:01.515 Supported: No 00:12:01.515 00:12:01.515 Persistent Memory Region Support 00:12:01.515 ================================ 00:12:01.515 Supported: No 00:12:01.515 00:12:01.515 Admin Command Set Attributes 00:12:01.515 ============================ 00:12:01.515 Security Send/Receive: Not Supported 00:12:01.515 Format NVM: Not Supported 00:12:01.515 Firmware Activate/Download: Not Supported 00:12:01.515 Namespace Management: Not Supported 00:12:01.515 Device Self-Test: Not Supported 00:12:01.515 Directives: Not Supported 00:12:01.515 NVMe-MI: Not Supported 00:12:01.515 Virtualization Management: Not Supported 00:12:01.515 Doorbell Buffer Config: Not Supported 00:12:01.515 Get LBA Status Capability: Not Supported 00:12:01.515 Command & Feature Lockdown Capability: Not Supported 00:12:01.515 Abort Command Limit: 1 00:12:01.515 Async 
Event Request Limit: 4 00:12:01.515 Number of Firmware Slots: N/A 00:12:01.515 Firmware Slot 1 Read-Only: N/A 00:12:01.515 Firmware Activation Without Reset: N/A 00:12:01.515 Multiple Update Detection Support: N/A 00:12:01.515 Firmware Update Granularity: No Information Provided 00:12:01.515 Per-Namespace SMART Log: No 00:12:01.515 Asymmetric Namespace Access Log Page: Not Supported 00:12:01.515 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:12:01.515 Command Effects Log Page: Not Supported 00:12:01.515 Get Log Page Extended Data: Supported 00:12:01.515 Telemetry Log Pages: Not Supported 00:12:01.515 Persistent Event Log Pages: Not Supported 00:12:01.515 Supported Log Pages Log Page: May Support 00:12:01.515 Commands Supported & Effects Log Page: Not Supported 00:12:01.515 Feature Identifiers & Effects Log Page:May Support 00:12:01.515 NVMe-MI Commands & Effects Log Page: May Support 00:12:01.515 Data Area 4 for Telemetry Log: Not Supported 00:12:01.515 Error Log Page Entries Supported: 128 00:12:01.515 Keep Alive: Not Supported 00:12:01.515 00:12:01.515 NVM Command Set Attributes 00:12:01.515 ========================== 00:12:01.515 Submission Queue Entry Size 00:12:01.515 Max: 1 00:12:01.515 Min: 1 00:12:01.515 Completion Queue Entry Size 00:12:01.515 Max: 1 00:12:01.515 Min: 1 00:12:01.515 Number of Namespaces: 0 00:12:01.515 Compare Command: Not Supported 00:12:01.515 Write Uncorrectable Command: Not Supported 00:12:01.515 Dataset Management Command: Not Supported 00:12:01.515 Write Zeroes Command: Not Supported 00:12:01.515 Set Features Save Field: Not Supported 00:12:01.515 Reservations: Not Supported 00:12:01.515 Timestamp: Not Supported 00:12:01.515 Copy: Not Supported 00:12:01.515 Volatile Write Cache: Not Present 00:12:01.515 Atomic Write Unit (Normal): 1 00:12:01.515 Atomic Write Unit (PFail): 1 00:12:01.515 Atomic Compare & Write Unit: 1 00:12:01.515 Fused Compare & Write: Supported 00:12:01.515 Scatter-Gather List 00:12:01.515 SGL Command Set: Supported 00:12:01.515 SGL Keyed: Supported 00:12:01.515 SGL Bit Bucket Descriptor: Not Supported 00:12:01.515 SGL Metadata Pointer: Not Supported 00:12:01.515 Oversized SGL: Not Supported 00:12:01.515 SGL Metadata Address: Not Supported 00:12:01.515 SGL Offset: Supported 00:12:01.515 Transport SGL Data Block: Not Supported 00:12:01.515 Replay Protected Memory Block: Not Supported 00:12:01.515 00:12:01.515 Firmware Slot Information 00:12:01.515 ========================= 00:12:01.515 Active slot: 0 00:12:01.515 00:12:01.515 00:12:01.515 Error Log 00:12:01.515 ========= 00:12:01.515 00:12:01.515 Active Namespaces 00:12:01.515 ================= 00:12:01.515 Discovery Log Page 00:12:01.515 ================== 00:12:01.515 Generation Counter: 2 00:12:01.515 Number of Records: 2 00:12:01.515 Record Format: 0 00:12:01.515 00:12:01.515 Discovery Log Entry 0 00:12:01.515 ---------------------- 00:12:01.515 Transport Type: 3 (TCP) 00:12:01.515 Address Family: 1 (IPv4) 00:12:01.515 Subsystem Type: 3 (Current Discovery Subsystem) 00:12:01.515 Entry Flags: 00:12:01.515 Duplicate Returned Information: 1 00:12:01.515 Explicit Persistent Connection Support for Discovery: 1 00:12:01.515 Transport Requirements: 00:12:01.515 Secure Channel: Not Required 00:12:01.515 Port ID: 0 (0x0000) 00:12:01.515 Controller ID: 65535 (0xffff) 00:12:01.515 Admin Max SQ Size: 128 00:12:01.515 Transport Service Identifier: 4420 00:12:01.515 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:12:01.515 Transport Address: 10.0.0.2 00:12:01.515 
Discovery Log Entry 1 00:12:01.515 ---------------------- 00:12:01.515 Transport Type: 3 (TCP) 00:12:01.515 Address Family: 1 (IPv4) 00:12:01.515 Subsystem Type: 2 (NVM Subsystem) 00:12:01.515 Entry Flags: 00:12:01.515 Duplicate Returned Information: 0 00:12:01.515 Explicit Persistent Connection Support for Discovery: 0 00:12:01.515 Transport Requirements: 00:12:01.515 Secure Channel: Not Required 00:12:01.515 Port ID: 0 (0x0000) 00:12:01.515 Controller ID: 65535 (0xffff) 00:12:01.515 Admin Max SQ Size: 128 00:12:01.515 Transport Service Identifier: 4420 00:12:01.515 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:12:01.515 Transport Address: 10.0.0.2 [2024-04-17 15:35:02.735390] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:12:01.515 [2024-04-17 15:35:02.735406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.516 [2024-04-17 15:35:02.735414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.516 [2024-04-17 15:35:02.735420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.516 [2024-04-17 15:35:02.735427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.516 [2024-04-17 15:35:02.735436] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735441] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.735453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.735474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.735525] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.735532] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.735536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.735554] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735559] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735562] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.735570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.735592] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.735663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.735670] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.735674] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735678] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.735683] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:12:01.516 [2024-04-17 15:35:02.735689] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:12:01.516 [2024-04-17 15:35:02.735699] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735704] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735708] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.735715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.735732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.735800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.735808] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.735812] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735816] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.735828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735837] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.735844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.735865] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.735912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.735918] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.735922] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735926] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.735937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735942] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.735945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.735953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.735970] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736016] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 
15:35:02.736023] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736031] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736050] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.736057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.736074] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.736134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736152] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736157] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736161] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.736168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.736184] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.736241] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736245] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736249] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736260] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736264] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736268] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.736275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.736292] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736341] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.736348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736352] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 
[2024-04-17 15:35:02.736356] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736371] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736375] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.736382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.736399] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736449] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.736456] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736459] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736463] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736474] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736482] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.516 [2024-04-17 15:35:02.736490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.516 [2024-04-17 15:35:02.736506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.516 [2024-04-17 15:35:02.736550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.516 [2024-04-17 15:35:02.736557] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.516 [2024-04-17 15:35:02.736561] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.516 [2024-04-17 15:35:02.736565] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.516 [2024-04-17 15:35:02.736576] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.736580] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.736584] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.517 [2024-04-17 15:35:02.736592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.517 [2024-04-17 15:35:02.736608] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.517 [2024-04-17 15:35:02.736659] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.517 [2024-04-17 15:35:02.736666] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.517 [2024-04-17 15:35:02.736669] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.736673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.517 [2024-04-17 15:35:02.736684] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.736689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.736692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.517 [2024-04-17 15:35:02.736700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.517 [2024-04-17 15:35:02.736716] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.517 [2024-04-17 15:35:02.740767] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.517 [2024-04-17 15:35:02.740790] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.517 [2024-04-17 15:35:02.740795] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.740800] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.517 [2024-04-17 15:35:02.740814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.740820] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.740823] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x880300) 00:12:01.517 [2024-04-17 15:35:02.740832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.517 [2024-04-17 15:35:02.740857] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c8de0, cid 3, qid 0 00:12:01.517 [2024-04-17 15:35:02.740912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.517 [2024-04-17 15:35:02.740919] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.517 [2024-04-17 15:35:02.740923] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.740927] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c8de0) on tqpair=0x880300 00:12:01.517 [2024-04-17 15:35:02.740936] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:12:01.517 00:12:01.517 15:35:02 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:12:01.517 [2024-04-17 15:35:02.778869] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
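(Editor's note, not part of the captured log.) The command above runs build/bin/spdk_nvme_identify against the TCP target at 10.0.0.2:4420 for subsystem nqn.2016-06.io.spdk:cnode1, and the DEBUG trace that follows is the driver walking its fabrics bring-up state machine (FABRIC CONNECT, VS/CAP property reads, CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY, AER and keep-alive setup). As a rough illustration only, not the identify tool's actual source, a minimal host program using the public SPDK API could reach the same point; the program name "identify_sketch" is made up for the example.

    /*
     * Minimal sketch (assumption: built against the SPDK public headers
     * spdk/env.h and spdk/nvme.h from this tree). Error handling is
     * reduced to bare returns.
     */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same -r string the test passes to spdk_nvme_identify above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /*
         * spdk_nvme_connect() is the synchronous attach path; it drives the
         * controller-initialization states that the DEBUG lines below print
         * one by one (read vs, read cap, check en, enable, identify, ...).
         */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", cdata->mn);
        printf("Serial Number: %.20s\n", cdata->sn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }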
00:12:01.517 [2024-04-17 15:35:02.778905] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:12:01.517 [2024-04-17 15:35:02.914872] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:12:01.517 [2024-04-17 15:35:02.914947] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:01.517 [2024-04-17 15:35:02.914954] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:01.517 [2024-04-17 15:35:02.914970] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:01.517 [2024-04-17 15:35:02.914987] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:01.517 [2024-04-17 15:35:02.915152] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:12:01.517 [2024-04-17 15:35:02.915223] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x56e300 0 00:12:01.517 [2024-04-17 15:35:02.919775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:01.517 [2024-04-17 15:35:02.919802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:01.517 [2024-04-17 15:35:02.919808] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:01.517 [2024-04-17 15:35:02.919812] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:01.517 [2024-04-17 15:35:02.919872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.919879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.919884] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.517 [2024-04-17 15:35:02.919903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:01.517 [2024-04-17 15:35:02.919933] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.517 [2024-04-17 15:35:02.926787] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.517 [2024-04-17 15:35:02.926810] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.517 [2024-04-17 15:35:02.926816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.926821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.517 [2024-04-17 15:35:02.926837] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:01.517 [2024-04-17 15:35:02.926847] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:12:01.517 [2024-04-17 15:35:02.926854] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:12:01.517 [2024-04-17 15:35:02.926873] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.926878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.926882] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.517 [2024-04-17 15:35:02.926892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.517 [2024-04-17 15:35:02.926920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.517 [2024-04-17 15:35:02.926985] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.517 [2024-04-17 15:35:02.926992] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.517 [2024-04-17 15:35:02.926996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.517 [2024-04-17 15:35:02.927000] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.517 [2024-04-17 15:35:02.927011] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:12:01.517 [2024-04-17 15:35:02.927020] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:12:01.517 [2024-04-17 15:35:02.927028] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927032] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927036] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.927044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.927062] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.927513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.927529] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.927534] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927538] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.927545] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:12:01.518 [2024-04-17 15:35:02.927555] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.927563] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927568] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927572] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.927579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.927598] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.927652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.927659] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.927663] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927667] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.927673] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.927684] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.927693] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.927700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.927717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.928112] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.928127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.928132] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928136] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.928142] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:12:01.518 [2024-04-17 15:35:02.928148] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.928157] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.928264] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:12:01.518 [2024-04-17 15:35:02.928268] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.928278] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928283] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928287] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.928294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.928315] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.928695] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.928709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.928714] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928718] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.928724] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.518 [2024-04-17 15:35:02.928735] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928744] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.928761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.928782] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.928838] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.928844] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.928848] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928852] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.928858] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.518 [2024-04-17 15:35:02.928863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:12:01.518 [2024-04-17 15:35:02.928872] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:12:01.518 [2024-04-17 15:35:02.928883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.518 [2024-04-17 15:35:02.928894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.928898] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.928907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.518 [2024-04-17 15:35:02.928925] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.929438] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.518 [2024-04-17 15:35:02.929453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.518 [2024-04-17 15:35:02.929458] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929463] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=4096, cccid=0 00:12:01.518 [2024-04-17 15:35:02.929468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b69c0) on tqpair(0x56e300): expected_datao=0, payload_size=4096 00:12:01.518 [2024-04-17 15:35:02.929474] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929483] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929488] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 
15:35:02.929498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.929504] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.929508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929512] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.929522] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:12:01.518 [2024-04-17 15:35:02.929528] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:12:01.518 [2024-04-17 15:35:02.929533] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:12:01.518 [2024-04-17 15:35:02.929542] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:12:01.518 [2024-04-17 15:35:02.929548] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:12:01.518 [2024-04-17 15:35:02.929553] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:12:01.518 [2024-04-17 15:35:02.929563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.518 [2024-04-17 15:35:02.929572] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929576] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929580] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.518 [2024-04-17 15:35:02.929589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:01.518 [2024-04-17 15:35:02.929609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.518 [2024-04-17 15:35:02.929885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.518 [2024-04-17 15:35:02.929900] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.518 [2024-04-17 15:35:02.929905] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929909] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b69c0) on tqpair=0x56e300 00:12:01.518 [2024-04-17 15:35:02.929918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929922] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.518 [2024-04-17 15:35:02.929926] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.929934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.519 [2024-04-17 15:35:02.929941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929945] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929949] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x56e300) 
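(Editor's note, not part of the captured log.) The identify-done trace just above reports "transport max_xfer_size 4294967295" and "MDTS max_xfer_size 131072": the TCP transport imposes no practical limit, so the controller's MDTS field is what caps transfers at 128 KiB. Purely as a sketch of that arithmetic (the helper name is hypothetical and not part of the test), the figure can be recomputed from the identify data and CAP.MPSMIN with the public API:

    /*
     * Hypothetical helper: MDTS is expressed in units of the minimum
     * memory page size, which is 2^(12 + CAP.MPSMIN) bytes.
     */
    #include <stdint.h>
    #include "spdk/nvme.h"

    static uint64_t
    mdts_max_xfer_size(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        uint64_t min_page_size = 1ULL << (12 + cap.bits.mpsmin);

        if (cdata->mdts == 0) {
            return UINT64_MAX;  /* MDTS of 0 means no limit is reported */
        }
        return min_page_size << cdata->mdts;
    }

With a 4096-byte minimum page and mdts = 5 this gives 4096 << 5 = 131072 bytes, matching the "Max Data Transfer Size: 131072" lines in the identify output.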
00:12:01.519 [2024-04-17 15:35:02.929955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.519 [2024-04-17 15:35:02.929962] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929966] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929970] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.929976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.519 [2024-04-17 15:35:02.929983] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929987] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.929991] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.929997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.519 [2024-04-17 15:35:02.930002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.930016] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.930024] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.930028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.930036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.519 [2024-04-17 15:35:02.930059] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b69c0, cid 0, qid 0 00:12:01.519 [2024-04-17 15:35:02.930067] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6b20, cid 1, qid 0 00:12:01.519 [2024-04-17 15:35:02.930072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6c80, cid 2, qid 0 00:12:01.519 [2024-04-17 15:35:02.930077] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.519 [2024-04-17 15:35:02.930082] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.519 [2024-04-17 15:35:02.930709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.519 [2024-04-17 15:35:02.930717] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.519 [2024-04-17 15:35:02.930721] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.930725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.519 [2024-04-17 15:35:02.930732] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:12:01.519 [2024-04-17 15:35:02.930738] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:01.519 [2024-04-17 
15:35:02.930747] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.934767] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.934786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.934792] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.934796] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.934806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:01.519 [2024-04-17 15:35:02.934836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.519 [2024-04-17 15:35:02.935370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.519 [2024-04-17 15:35:02.935383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.519 [2024-04-17 15:35:02.935387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.935392] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.519 [2024-04-17 15:35:02.935445] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.935457] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.935466] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.935471] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.935479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.519 [2024-04-17 15:35:02.935499] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.519 [2024-04-17 15:35:02.935954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.519 [2024-04-17 15:35:02.935967] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.519 [2024-04-17 15:35:02.935972] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.935976] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=4096, cccid=4 00:12:01.519 [2024-04-17 15:35:02.935981] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b6f40) on tqpair(0x56e300): expected_datao=0, payload_size=4096 00:12:01.519 [2024-04-17 15:35:02.935986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.935994] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.935998] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.936473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.519 [2024-04-17 15:35:02.936482] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.519 [2024-04-17 15:35:02.936487] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.936491] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.519 [2024-04-17 15:35:02.936504] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:12:01.519 [2024-04-17 15:35:02.936521] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.936534] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.936542] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.936546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.936554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.519 [2024-04-17 15:35:02.936576] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.519 [2024-04-17 15:35:02.937158] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.519 [2024-04-17 15:35:02.940783] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.519 [2024-04-17 15:35:02.940800] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940806] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=4096, cccid=4 00:12:01.519 [2024-04-17 15:35:02.940811] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b6f40) on tqpair(0x56e300): expected_datao=0, payload_size=4096 00:12:01.519 [2024-04-17 15:35:02.940816] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940824] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940829] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940841] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.519 [2024-04-17 15:35:02.940848] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.519 [2024-04-17 15:35:02.940851] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940856] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.519 [2024-04-17 15:35:02.940878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.940892] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:01.519 [2024-04-17 15:35:02.940903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.940908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.519 [2024-04-17 15:35:02.940917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.519 [2024-04-17 15:35:02.940946] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.519 [2024-04-17 15:35:02.941035] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.519 [2024-04-17 15:35:02.941042] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.519 [2024-04-17 15:35:02.941046] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.941050] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=4096, cccid=4 00:12:01.519 [2024-04-17 15:35:02.941055] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b6f40) on tqpair(0x56e300): expected_datao=0, payload_size=4096 00:12:01.519 [2024-04-17 15:35:02.941060] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.941067] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.941072] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.941080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.519 [2024-04-17 15:35:02.941087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.519 [2024-04-17 15:35:02.941091] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.519 [2024-04-17 15:35:02.941095] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941130] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941150] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941164] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:12:01.520 [2024-04-17 15:35:02.941169] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:12:01.520 [2024-04-17 15:35:02.941175] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:12:01.520 [2024-04-17 15:35:02.941193] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941214] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.520 [2024-04-17 15:35:02.941256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.520 [2024-04-17 15:35:02.941263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b70a0, cid 5, qid 0 00:12:01.520 [2024-04-17 15:35:02.941335] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.520 [2024-04-17 15:35:02.941342] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.520 [2024-04-17 15:35:02.941346] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941350] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941358] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.520 [2024-04-17 15:35:02.941365] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.520 [2024-04-17 15:35:02.941369] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941373] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b70a0) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941388] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941414] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b70a0, cid 5, qid 0 00:12:01.520 [2024-04-17 15:35:02.941458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.520 [2024-04-17 15:35:02.941465] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.520 [2024-04-17 15:35:02.941468] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941473] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b70a0) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941483] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941488] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941511] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b70a0, cid 5, qid 0 00:12:01.520 [2024-04-17 15:35:02.941568] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.520 [2024-04-17 15:35:02.941576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.520 [2024-04-17 15:35:02.941581] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941585] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b70a0) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941600] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b70a0, cid 5, qid 0 00:12:01.520 [2024-04-17 15:35:02.941679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.520 [2024-04-17 15:35:02.941686] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.520 [2024-04-17 15:35:02.941690] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941694] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b70a0) on tqpair=0x56e300 00:12:01.520 [2024-04-17 15:35:02.941709] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941713] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941729] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941733] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 [2024-04-17 15:35:02.941748] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.520 [2024-04-17 15:35:02.941752] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x56e300) 00:12:01.520 [2024-04-17 15:35:02.941759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.520 ===================================================== 00:12:01.520 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:01.520 ===================================================== 00:12:01.520 Controller Capabilities/Features 00:12:01.520 ================================ 00:12:01.520 Vendor ID: 8086 00:12:01.520 Subsystem Vendor ID: 8086 00:12:01.520 Serial Number: SPDK00000000000001 00:12:01.520 Model Number: SPDK bdev Controller 00:12:01.520 Firmware Version: 24.05 00:12:01.520 Recommended Arb Burst: 6 00:12:01.520 IEEE OUI Identifier: e4 d2 5c 00:12:01.520 Multi-path I/O 00:12:01.520 May have multiple subsystem ports: Yes 00:12:01.520 May have multiple controllers: Yes 00:12:01.520 Associated with SR-IOV VF: No 00:12:01.520 Max Data Transfer Size: 131072 00:12:01.520 Max Number of Namespaces: 32 00:12:01.520 Max Number of I/O Queues: 127 00:12:01.520 NVMe Specification Version (VS): 
1.3 00:12:01.520 NVMe Specification Version (Identify): 1.3 00:12:01.520 Maximum Queue Entries: 128 00:12:01.520 Contiguous Queues Required: Yes 00:12:01.520 Arbitration Mechanisms Supported 00:12:01.520 Weighted Round Robin: Not Supported 00:12:01.520 Vendor Specific: Not Supported 00:12:01.520 Reset Timeout: 15000 ms 00:12:01.520 Doorbell Stride: 4 bytes 00:12:01.520 NVM Subsystem Reset: Not Supported 00:12:01.520 Command Sets Supported 00:12:01.520 NVM Command Set: Supported 00:12:01.520 Boot Partition: Not Supported 00:12:01.520 Memory Page Size Minimum: 4096 bytes 00:12:01.520 Memory Page Size Maximum: 4096 bytes 00:12:01.520 Persistent Memory Region: Not Supported 00:12:01.520 Optional Asynchronous Events Supported 00:12:01.520 Namespace Attribute Notices: Supported 00:12:01.520 Firmware Activation Notices: Not Supported 00:12:01.520 ANA Change Notices: Not Supported 00:12:01.520 PLE Aggregate Log Change Notices: Not Supported 00:12:01.520 LBA Status Info Alert Notices: Not Supported 00:12:01.520 EGE Aggregate Log Change Notices: Not Supported 00:12:01.520 Normal NVM Subsystem Shutdown event: Not Supported 00:12:01.520 Zone Descriptor Change Notices: Not Supported 00:12:01.520 Discovery Log Change Notices: Not Supported 00:12:01.520 Controller Attributes 00:12:01.520 128-bit Host Identifier: Supported 00:12:01.520 Non-Operational Permissive Mode: Not Supported 00:12:01.520 NVM Sets: Not Supported 00:12:01.520 Read Recovery Levels: Not Supported 00:12:01.520 Endurance Groups: Not Supported 00:12:01.520 Predictable Latency Mode: Not Supported 00:12:01.520 Traffic Based Keep ALive: Not Supported 00:12:01.520 Namespace Granularity: Not Supported 00:12:01.520 SQ Associations: Not Supported 00:12:01.520 UUID List: Not Supported 00:12:01.520 Multi-Domain Subsystem: Not Supported 00:12:01.520 Fixed Capacity Management: Not Supported 00:12:01.520 Variable Capacity Management: Not Supported 00:12:01.520 Delete Endurance Group: Not Supported 00:12:01.520 Delete NVM Set: Not Supported 00:12:01.520 Extended LBA Formats Supported: Not Supported 00:12:01.520 Flexible Data Placement Supported: Not Supported 00:12:01.520 00:12:01.520 Controller Memory Buffer Support 00:12:01.520 ================================ 00:12:01.521 Supported: No 00:12:01.521 00:12:01.521 Persistent Memory Region Support 00:12:01.521 ================================ 00:12:01.521 Supported: No 00:12:01.521 00:12:01.521 Admin Command Set Attributes 00:12:01.521 ============================ 00:12:01.521 Security Send/Receive: Not Supported 00:12:01.521 Format NVM: Not Supported 00:12:01.521 Firmware Activate/Download: Not Supported 00:12:01.521 Namespace Management: Not Supported 00:12:01.521 Device Self-Test: Not Supported 00:12:01.521 Directives: Not Supported 00:12:01.521 NVMe-MI: Not Supported 00:12:01.521 Virtualization Management: Not Supported 00:12:01.521 Doorbell Buffer Config: Not Supported 00:12:01.521 Get LBA Status Capability: Not Supported 00:12:01.521 Command & Feature Lockdown Capability: Not Supported 00:12:01.521 Abort Command Limit: 4 00:12:01.521 Async Event Request Limit: 4 00:12:01.521 Number of Firmware Slots: N/A 00:12:01.521 Firmware Slot 1 Read-Only: N/A 00:12:01.521 Firmware Activation Without Reset: N/A 00:12:01.521 Multiple Update Detection Support: N/A 00:12:01.521 Firmware Update Granularity: No Information Provided 00:12:01.521 Per-Namespace SMART Log: No 00:12:01.521 Asymmetric Namespace Access Log Page: Not Supported 00:12:01.521 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:12:01.521 Command 
Effects Log Page: Supported 00:12:01.521 Get Log Page Extended Data: Supported 00:12:01.521 Telemetry Log Pages: Not Supported 00:12:01.521 Persistent Event Log Pages: Not Supported 00:12:01.521 Supported Log Pages Log Page: May Support 00:12:01.521 Commands Supported & Effects Log Page: Not Supported 00:12:01.521 Feature Identifiers & Effects Log Page:May Support 00:12:01.521 NVMe-MI Commands & Effects Log Page: May Support 00:12:01.521 Data Area 4 for Telemetry Log: Not Supported 00:12:01.521 Error Log Page Entries Supported: 128 00:12:01.521 Keep Alive: Supported 00:12:01.521 Keep Alive Granularity: 10000 ms 00:12:01.521 00:12:01.521 NVM Command Set Attributes 00:12:01.521 ========================== 00:12:01.521 Submission Queue Entry Size 00:12:01.521 Max: 64 00:12:01.521 Min: 64 00:12:01.521 Completion Queue Entry Size 00:12:01.521 Max: 16 00:12:01.521 Min: 16 00:12:01.521 Number of Namespaces: 32 00:12:01.521 Compare Command: Supported 00:12:01.521 Write Uncorrectable Command: Not Supported 00:12:01.521 Dataset Management Command: Supported 00:12:01.521 Write Zeroes Command: Supported 00:12:01.521 Set Features Save Field: Not Supported 00:12:01.521 Reservations: Supported 00:12:01.521 Timestamp: Not Supported 00:12:01.521 Copy: Supported 00:12:01.521 Volatile Write Cache: Present 00:12:01.521 Atomic Write Unit (Normal): 1 00:12:01.521 Atomic Write Unit (PFail): 1 00:12:01.521 Atomic Compare & Write Unit: 1 00:12:01.521 Fused Compare & Write: Supported 00:12:01.521 Scatter-Gather List 00:12:01.521 SGL Command Set: Supported 00:12:01.521 SGL Keyed: Supported 00:12:01.521 SGL Bit Bucket Descriptor: Not Supported 00:12:01.521 SGL Metadata Pointer: Not Supported 00:12:01.521 Oversized SGL: Not Supported 00:12:01.521 SGL Metadata Address: Not Supported 00:12:01.521 SGL Offset: Supported 00:12:01.521 Transport SGL Data Block: Not Supported 00:12:01.521 Replay Protected Memory Block: Not Supported 00:12:01.521 00:12:01.521 Firmware Slot Information 00:12:01.521 ========================= 00:12:01.521 Active slot: 1 00:12:01.521 Slot 1 Firmware Revision: 24.05 00:12:01.521 00:12:01.521 00:12:01.521 Commands Supported and Effects 00:12:01.521 ============================== 00:12:01.521 Admin Commands 00:12:01.521 -------------- 00:12:01.521 Get Log Page (02h): Supported 00:12:01.521 Identify (06h): Supported 00:12:01.521 Abort (08h): Supported 00:12:01.521 Set Features (09h): Supported 00:12:01.521 Get Features (0Ah): Supported 00:12:01.521 Asynchronous Event Request (0Ch): Supported 00:12:01.521 Keep Alive (18h): Supported 00:12:01.521 I/O Commands 00:12:01.521 ------------ 00:12:01.521 Flush (00h): Supported LBA-Change 00:12:01.521 Write (01h): Supported LBA-Change 00:12:01.521 Read (02h): Supported 00:12:01.521 Compare (05h): Supported 00:12:01.521 Write Zeroes (08h): Supported LBA-Change 00:12:01.521 Dataset Management (09h): Supported LBA-Change 00:12:01.521 Copy (19h): Supported LBA-Change 00:12:01.521 Unknown (79h): Supported LBA-Change 00:12:01.521 Unknown (7Ah): Supported 00:12:01.521 00:12:01.521 Error Log 00:12:01.521 ========= 00:12:01.521 00:12:01.521 Arbitration 00:12:01.521 =========== 00:12:01.521 Arbitration Burst: 1 00:12:01.521 00:12:01.521 Power Management 00:12:01.521 ================ 00:12:01.521 Number of Power States: 1 00:12:01.521 Current Power State: Power State #0 00:12:01.521 Power State #0: 00:12:01.521 Max Power: 0.00 W 00:12:01.521 Non-Operational State: Operational 00:12:01.521 Entry Latency: Not Reported 00:12:01.521 Exit Latency: Not Reported 00:12:01.521 
Relative Read Throughput: 0 00:12:01.521 Relative Read Latency: 0 00:12:01.521 Relative Write Throughput: 0 00:12:01.521 Relative Write Latency: 0 00:12:01.521 Idle Power: Not Reported 00:12:01.521 Active Power: Not Reported 00:12:01.521 Non-Operational Permissive Mode: Not Supported 00:12:01.521 00:12:01.521 Health Information 00:12:01.521 ================== 00:12:01.521 Critical Warnings: 00:12:01.521 Available Spare Space: OK 00:12:01.521 Temperature: OK 00:12:01.521 Device Reliability: OK 00:12:01.521 Read Only: No 00:12:01.521 Volatile Memory Backup: OK 00:12:01.521 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:01.521 Temperature Threshold: [2024-04-17 15:35:02.942007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x56e300) 00:12:01.521 [2024-04-17 15:35:02.942035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.521 [2024-04-17 15:35:02.942067] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b70a0, cid 5, qid 0 00:12:01.521 [2024-04-17 15:35:02.942075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6f40, cid 4, qid 0 00:12:01.521 [2024-04-17 15:35:02.942080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b7200, cid 6, qid 0 00:12:01.521 [2024-04-17 15:35:02.942085] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b7360, cid 7, qid 0 00:12:01.521 [2024-04-17 15:35:02.942367] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.521 [2024-04-17 15:35:02.942374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.521 [2024-04-17 15:35:02.942378] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942383] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=8192, cccid=5 00:12:01.521 [2024-04-17 15:35:02.942388] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b70a0) on tqpair(0x56e300): expected_datao=0, payload_size=8192 00:12:01.521 [2024-04-17 15:35:02.942393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942411] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942416] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942422] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.521 [2024-04-17 15:35:02.942428] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.521 [2024-04-17 15:35:02.942432] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942436] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=512, cccid=4 00:12:01.521 [2024-04-17 15:35:02.942441] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b6f40) on tqpair(0x56e300): expected_datao=0, payload_size=512 00:12:01.521 [2024-04-17 15:35:02.942446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942452] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942457] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:12:01.521 [2024-04-17 15:35:02.942463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.521 [2024-04-17 15:35:02.942469] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.521 [2024-04-17 15:35:02.942473] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942476] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=512, cccid=6 00:12:01.521 [2024-04-17 15:35:02.942481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b7200) on tqpair(0x56e300): expected_datao=0, payload_size=512 00:12:01.521 [2024-04-17 15:35:02.942486] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942493] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.521 [2024-04-17 15:35:02.942497] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942503] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:01.522 [2024-04-17 15:35:02.942509] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:01.522 [2024-04-17 15:35:02.942513] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942516] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x56e300): datao=0, datal=4096, cccid=7 00:12:01.522 [2024-04-17 15:35:02.942521] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b7360) on tqpair(0x56e300): expected_datao=0, payload_size=4096 00:12:01.522 [2024-04-17 15:35:02.942526] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942536] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942540] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942561] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.942568] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.942572] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942576] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b70a0) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.942596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.942604] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.942608] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942612] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6f40) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.942623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.942630] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.942634] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942638] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b7200) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.942646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.942662] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 
[2024-04-17 15:35:02.942666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942670] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b7360) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.942806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.942823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.942849] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b7360, cid 7, qid 0 00:12:01.522 [2024-04-17 15:35:02.942910] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.942917] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.942921] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.942926] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b7360) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.942964] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:12:01.522 [2024-04-17 15:35:02.942979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.522 [2024-04-17 15:35:02.942987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.522 [2024-04-17 15:35:02.942993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.522 [2024-04-17 15:35:02.943000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.522 [2024-04-17 15:35:02.943009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943018] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.943026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.943049] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.522 [2024-04-17 15:35:02.943096] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.943103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.943108] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943112] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.943121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.943138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.943159] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.522 [2024-04-17 15:35:02.943232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.943239] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.943243] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943247] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.943253] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:12:01.522 [2024-04-17 15:35:02.943259] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:12:01.522 [2024-04-17 15:35:02.943269] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943274] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943278] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.943286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.943302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.522 [2024-04-17 15:35:02.943366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.943373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.943377] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.943393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943401] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.943409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.943425] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.522 [2024-04-17 15:35:02.943470] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.943476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.522 [2024-04-17 15:35:02.943480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943484] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.522 [2024-04-17 15:35:02.943495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.522 [2024-04-17 15:35:02.943499] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.522 [2024-04-17 
15:35:02.943503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.522 [2024-04-17 15:35:02.943512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.522 [2024-04-17 15:35:02.943528] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.522 [2024-04-17 15:35:02.943576] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.522 [2024-04-17 15:35:02.943583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.943587] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943591] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.943602] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943606] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.943618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.943634] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.943681] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.943688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.943692] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943696] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.943706] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.943723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.943738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.943819] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.943827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.943832] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943836] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.943848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.943865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.943884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.943938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.943945] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.943949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943954] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.943965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943969] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.943973] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.943982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.943999] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944057] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944061] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944076] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944158] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944169] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944173] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944189] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, 
qid 0 00:12:01.523 [2024-04-17 15:35:02.944283] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944308] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944340] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944388] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944399] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944403] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944414] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944504] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944513] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944532] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944555] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944600] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944607] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944611] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944615] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944625] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944630] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944657] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944702] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.523 [2024-04-17 15:35:02.944713] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944717] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.523 [2024-04-17 15:35:02.944727] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944738] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.523 [2024-04-17 15:35:02.944742] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.523 [2024-04-17 15:35:02.944749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.523 [2024-04-17 15:35:02.944791] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.523 [2024-04-17 15:35:02.944848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.523 [2024-04-17 15:35:02.944856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.944860] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.944875] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944884] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.944893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.944912] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.944962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.944969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.944973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944977] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.944988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.944997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945088] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945092] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945103] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945108] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945112] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945136] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945211] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945215] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945234] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945303] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945310] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945314] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945318] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 
[2024-04-17 15:35:02.945333] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945337] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945405] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945415] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945434] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945438] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945510] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945517] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945520] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945525] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945535] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945540] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945543] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945621] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945628] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945632] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945636] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945647] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945651] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945655] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x56e300) 00:12:01.524 [2024-04-17 15:35:02.945663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.524 [2024-04-17 15:35:02.945679] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.524 [2024-04-17 15:35:02.945725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.524 [2024-04-17 15:35:02.945731] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.524 [2024-04-17 15:35:02.945735] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.524 [2024-04-17 15:35:02.945750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945754] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:01.524 [2024-04-17 15:35:02.945758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x56e300) 00:12:01.784 [2024-04-17 15:35:02.949774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:01.784 [2024-04-17 15:35:02.949944] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b6de0, cid 3, qid 0 00:12:01.784 [2024-04-17 15:35:02.950164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:01.784 [2024-04-17 15:35:02.950264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:01.784 [2024-04-17 15:35:02.950347] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:01.784 [2024-04-17 15:35:02.950385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5b6de0) on tqpair=0x56e300 00:12:01.784 [2024-04-17 15:35:02.950496] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:12:01.784 0 Kelvin (-273 Celsius) 00:12:01.784 Available Spare: 0% 00:12:01.784 Available Spare Threshold: 0% 00:12:01.784 Life Percentage Used: 0% 00:12:01.784 Data Units Read: 0 00:12:01.784 Data Units Written: 0 00:12:01.784 Host Read Commands: 0 00:12:01.784 Host Write Commands: 0 00:12:01.784 Controller Busy Time: 0 minutes 00:12:01.784 Power Cycles: 0 00:12:01.784 Power On Hours: 0 hours 00:12:01.784 Unsafe Shutdowns: 0 00:12:01.784 Unrecoverable Media Errors: 0 00:12:01.784 Lifetime Error Log Entries: 0 00:12:01.784 Warning Temperature Time: 0 minutes 00:12:01.784 Critical Temperature Time: 0 minutes 00:12:01.784 00:12:01.784 Number of Queues 00:12:01.784 ================ 00:12:01.784 Number of I/O Submission Queues: 127 00:12:01.784 Number of I/O Completion Queues: 127 00:12:01.784 00:12:01.784 Active Namespaces 00:12:01.784 ================= 00:12:01.784 Namespace ID:1 00:12:01.784 Error Recovery Timeout: Unlimited 00:12:01.784 Command Set Identifier: NVM (00h) 00:12:01.784 Deallocate: Supported 00:12:01.784 Deallocated/Unwritten Error: Not Supported 00:12:01.784 Deallocated Read Value: Unknown 00:12:01.784 Deallocate in Write Zeroes: Not Supported 00:12:01.784 Deallocated Guard Field: 0xFFFF 00:12:01.784 Flush: Supported 00:12:01.784 Reservation: Supported 00:12:01.784 Namespace Sharing Capabilities: Multiple Controllers 00:12:01.784 Size (in LBAs): 131072 (0GiB) 00:12:01.784 Capacity (in LBAs): 131072 (0GiB) 00:12:01.784 Utilization (in 
LBAs): 131072 (0GiB) 00:12:01.784 NGUID: ABCDEF0123456789ABCDEF0123456789 00:12:01.784 EUI64: ABCDEF0123456789 00:12:01.784 UUID: 352db8a6-2b44-4998-b658-90d987ef521d 00:12:01.784 Thin Provisioning: Not Supported 00:12:01.784 Per-NS Atomic Units: Yes 00:12:01.784 Atomic Boundary Size (Normal): 0 00:12:01.784 Atomic Boundary Size (PFail): 0 00:12:01.784 Atomic Boundary Offset: 0 00:12:01.784 Maximum Single Source Range Length: 65535 00:12:01.784 Maximum Copy Length: 65535 00:12:01.784 Maximum Source Range Count: 1 00:12:01.784 NGUID/EUI64 Never Reused: No 00:12:01.784 Namespace Write Protected: No 00:12:01.784 Number of LBA Formats: 1 00:12:01.784 Current LBA Format: LBA Format #00 00:12:01.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:01.784 00:12:01.784 15:35:02 -- host/identify.sh@51 -- # sync 00:12:01.784 15:35:03 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.784 15:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.784 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:12:01.784 15:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.784 15:35:03 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:12:01.784 15:35:03 -- host/identify.sh@56 -- # nvmftestfini 00:12:01.784 15:35:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:01.784 15:35:03 -- nvmf/common.sh@117 -- # sync 00:12:01.784 15:35:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.784 15:35:03 -- nvmf/common.sh@120 -- # set +e 00:12:01.784 15:35:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.784 15:35:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.784 rmmod nvme_tcp 00:12:01.784 rmmod nvme_fabrics 00:12:01.784 rmmod nvme_keyring 00:12:01.784 15:35:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.784 15:35:03 -- nvmf/common.sh@124 -- # set -e 00:12:01.784 15:35:03 -- nvmf/common.sh@125 -- # return 0 00:12:01.784 15:35:03 -- nvmf/common.sh@478 -- # '[' -n 71636 ']' 00:12:01.784 15:35:03 -- nvmf/common.sh@479 -- # killprocess 71636 00:12:01.784 15:35:03 -- common/autotest_common.sh@936 -- # '[' -z 71636 ']' 00:12:01.784 15:35:03 -- common/autotest_common.sh@940 -- # kill -0 71636 00:12:01.784 15:35:03 -- common/autotest_common.sh@941 -- # uname 00:12:01.784 15:35:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.784 15:35:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71636 00:12:01.784 killing process with pid 71636 00:12:01.784 15:35:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:01.784 15:35:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:01.784 15:35:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71636' 00:12:01.784 15:35:03 -- common/autotest_common.sh@955 -- # kill 71636 00:12:01.784 [2024-04-17 15:35:03.109315] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:01.784 15:35:03 -- common/autotest_common.sh@960 -- # wait 71636 00:12:02.043 15:35:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:02.043 15:35:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:02.043 15:35:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:02.043 15:35:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.043 15:35:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.043 15:35:03 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.043 15:35:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.043 15:35:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.302 15:35:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:02.302 ************************************ 00:12:02.302 END TEST nvmf_identify 00:12:02.302 ************************************ 00:12:02.302 00:12:02.302 real 0m2.647s 00:12:02.302 user 0m7.159s 00:12:02.302 sys 0m0.705s 00:12:02.302 15:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:02.302 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:12:02.302 15:35:03 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:02.302 15:35:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:02.302 15:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.302 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:12:02.302 ************************************ 00:12:02.302 START TEST nvmf_perf 00:12:02.302 ************************************ 00:12:02.302 15:35:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:02.302 * Looking for test storage... 00:12:02.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:02.302 15:35:03 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.302 15:35:03 -- nvmf/common.sh@7 -- # uname -s 00:12:02.302 15:35:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.302 15:35:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.302 15:35:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.302 15:35:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.302 15:35:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.302 15:35:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.302 15:35:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.302 15:35:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.302 15:35:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.302 15:35:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.302 15:35:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:02.302 15:35:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:02.302 15:35:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.302 15:35:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.302 15:35:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.302 15:35:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.302 15:35:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.302 15:35:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.302 15:35:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.302 15:35:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.302 15:35:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.302 15:35:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.302 15:35:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.302 15:35:03 -- paths/export.sh@5 -- # export PATH 00:12:02.302 15:35:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.302 15:35:03 -- nvmf/common.sh@47 -- # : 0 00:12:02.302 15:35:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.302 15:35:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.302 15:35:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.302 15:35:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.303 15:35:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.303 15:35:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.303 15:35:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.303 15:35:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.303 15:35:03 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:02.303 15:35:03 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:02.303 15:35:03 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:02.303 15:35:03 -- host/perf.sh@17 -- # nvmftestinit 00:12:02.303 15:35:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:02.303 15:35:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.303 15:35:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:02.303 15:35:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:02.303 15:35:03 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:12:02.303 15:35:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.303 15:35:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.303 15:35:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.303 15:35:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:02.303 15:35:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:02.303 15:35:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:02.303 15:35:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:02.303 15:35:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:02.303 15:35:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:02.303 15:35:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.303 15:35:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.303 15:35:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.303 15:35:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:02.303 15:35:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.303 15:35:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.303 15:35:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.303 15:35:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.303 15:35:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.303 15:35:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.303 15:35:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.303 15:35:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.303 15:35:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:02.561 15:35:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:02.561 Cannot find device "nvmf_tgt_br" 00:12:02.561 15:35:03 -- nvmf/common.sh@155 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.561 Cannot find device "nvmf_tgt_br2" 00:12:02.561 15:35:03 -- nvmf/common.sh@156 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:02.561 15:35:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:02.561 Cannot find device "nvmf_tgt_br" 00:12:02.561 15:35:03 -- nvmf/common.sh@158 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:02.561 Cannot find device "nvmf_tgt_br2" 00:12:02.561 15:35:03 -- nvmf/common.sh@159 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:02.561 15:35:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:02.561 15:35:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.561 15:35:03 -- nvmf/common.sh@162 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.561 15:35:03 -- nvmf/common.sh@163 -- # true 00:12:02.561 15:35:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.561 15:35:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.561 15:35:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.561 15:35:03 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.561 15:35:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.561 15:35:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.561 15:35:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.561 15:35:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.561 15:35:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.561 15:35:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:02.561 15:35:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:02.561 15:35:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:02.561 15:35:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:02.561 15:35:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.561 15:35:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.561 15:35:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.562 15:35:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:02.562 15:35:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:02.562 15:35:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.562 15:35:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.820 15:35:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.820 15:35:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.820 15:35:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.820 15:35:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:02.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:02.820 00:12:02.820 --- 10.0.0.2 ping statistics --- 00:12:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.820 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:02.820 15:35:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:02.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:12:02.820 00:12:02.820 --- 10.0.0.3 ping statistics --- 00:12:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.820 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:02.820 15:35:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:02.820 00:12:02.820 --- 10.0.0.1 ping statistics --- 00:12:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.820 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:02.820 15:35:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.820 15:35:04 -- nvmf/common.sh@422 -- # return 0 00:12:02.820 15:35:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:02.820 15:35:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.820 15:35:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:02.820 15:35:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:02.820 15:35:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.820 15:35:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:02.820 15:35:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:02.820 15:35:04 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:12:02.820 15:35:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:02.820 15:35:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:02.820 15:35:04 -- common/autotest_common.sh@10 -- # set +x 00:12:02.820 15:35:04 -- nvmf/common.sh@470 -- # nvmfpid=71848 00:12:02.820 15:35:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.820 15:35:04 -- nvmf/common.sh@471 -- # waitforlisten 71848 00:12:02.820 15:35:04 -- common/autotest_common.sh@817 -- # '[' -z 71848 ']' 00:12:02.820 15:35:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.820 15:35:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:02.820 15:35:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.820 15:35:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:02.820 15:35:04 -- common/autotest_common.sh@10 -- # set +x 00:12:02.820 [2024-04-17 15:35:04.131513] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:12:02.820 [2024-04-17 15:35:04.131606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.079 [2024-04-17 15:35:04.275179] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.079 [2024-04-17 15:35:04.407566] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.079 [2024-04-17 15:35:04.407848] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.079 [2024-04-17 15:35:04.408015] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.079 [2024-04-17 15:35:04.408143] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.079 [2024-04-17 15:35:04.408191] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
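
The nvmf_veth_init sequence traced above builds the virtual topology every TCP test in this run relies on: one initiator interface left on the host, two target interfaces moved into the nvmf_tgt_ns_spdk namespace, and their peer ends joined by a bridge. Reproduced by hand (a minimal sketch, not the harness itself, using the interface names and 10.0.0.0/24 addresses shown in the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                # bridge the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# bring all interfaces (and lo inside the namespace) up, then open the NVMe/TCP port
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this topology passes traffic before the nvmf target is started.
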
00:12:03.079 [2024-04-17 15:35:04.408431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.079 [2024-04-17 15:35:04.408509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.079 [2024-04-17 15:35:04.408564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.079 [2024-04-17 15:35:04.408567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.645 15:35:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.645 15:35:05 -- common/autotest_common.sh@850 -- # return 0 00:12:03.645 15:35:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:03.645 15:35:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:03.645 15:35:05 -- common/autotest_common.sh@10 -- # set +x 00:12:03.905 15:35:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.905 15:35:05 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:12:03.905 15:35:05 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:04.163 15:35:05 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:12:04.163 15:35:05 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:12:04.422 15:35:05 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:12:04.422 15:35:05 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.681 15:35:06 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:12:04.681 15:35:06 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:12:04.681 15:35:06 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:12:04.681 15:35:06 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:12:04.681 15:35:06 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:04.938 [2024-04-17 15:35:06.283359] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.938 15:35:06 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.196 15:35:06 -- host/perf.sh@45 -- # for bdev in $bdevs 00:12:05.196 15:35:06 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.454 15:35:06 -- host/perf.sh@45 -- # for bdev in $bdevs 00:12:05.454 15:35:06 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:12:05.714 15:35:07 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.973 [2024-04-17 15:35:07.250168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.973 15:35:07 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:06.232 15:35:07 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:12:06.232 15:35:07 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:06.232 15:35:07 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:12:06.232 15:35:07 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:07.609 Initializing NVMe 
Controllers 00:12:07.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:07.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:07.609 Initialization complete. Launching workers. 00:12:07.609 ======================================================== 00:12:07.609 Latency(us) 00:12:07.609 Device Information : IOPS MiB/s Average min max 00:12:07.609 PCIE (0000:00:10.0) NSID 1 from core 0: 21952.00 85.75 1457.40 353.09 7714.82 00:12:07.609 ======================================================== 00:12:07.609 Total : 21952.00 85.75 1457.40 353.09 7714.82 00:12:07.609 00:12:07.609 15:35:08 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:08.546 Initializing NVMe Controllers 00:12:08.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:08.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:08.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:08.546 Initialization complete. Launching workers. 00:12:08.546 ======================================================== 00:12:08.546 Latency(us) 00:12:08.546 Device Information : IOPS MiB/s Average min max 00:12:08.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3298.94 12.89 302.79 108.17 7291.69 00:12:08.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.51 0.48 8218.83 6007.42 12068.51 00:12:08.546 ======================================================== 00:12:08.546 Total : 3421.45 13.37 586.25 108.17 12068.51 00:12:08.546 00:12:08.546 15:35:09 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:09.923 Initializing NVMe Controllers 00:12:09.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:09.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:09.923 Initialization complete. Launching workers. 00:12:09.923 ======================================================== 00:12:09.923 Latency(us) 00:12:09.923 Device Information : IOPS MiB/s Average min max 00:12:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8452.69 33.02 3785.70 453.05 9373.61 00:12:09.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3611.74 14.11 8885.20 6157.19 19051.83 00:12:09.923 ======================================================== 00:12:09.923 Total : 12064.43 47.13 5312.35 453.05 19051.83 00:12:09.923 00:12:09.923 15:35:11 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:12:09.923 15:35:11 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:12.455 Initializing NVMe Controllers 00:12:12.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:12.455 Controller IO queue size 128, less than required. 00:12:12.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.455 Controller IO queue size 128, less than required. 
00:12:12.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:12.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:12.455 Initialization complete. Launching workers. 00:12:12.455 ======================================================== 00:12:12.455 Latency(us) 00:12:12.455 Device Information : IOPS MiB/s Average min max 00:12:12.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1493.16 373.29 87466.64 48612.29 159922.66 00:12:12.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.34 152.83 217951.84 91892.98 339521.62 00:12:12.455 ======================================================== 00:12:12.455 Total : 2104.50 526.13 125371.40 48612.29 339521.62 00:12:12.455 00:12:12.455 15:35:13 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:12:12.713 No valid NVMe controllers or AIO or URING devices found 00:12:12.713 Initializing NVMe Controllers 00:12:12.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:12.713 Controller IO queue size 128, less than required. 00:12:12.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.713 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:12:12.713 Controller IO queue size 128, less than required. 00:12:12.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.713 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:12:12.713 WARNING: Some requested NVMe devices were skipped 00:12:12.713 15:35:13 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:12:15.243 Initializing NVMe Controllers 00:12:15.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:15.243 Controller IO queue size 128, less than required. 00:12:15.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:15.243 Controller IO queue size 128, less than required. 00:12:15.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:15.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:15.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:15.243 Initialization complete. Launching workers. 
00:12:15.243 00:12:15.243 ==================== 00:12:15.243 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:12:15.243 TCP transport: 00:12:15.243 polls: 5972 00:12:15.243 idle_polls: 0 00:12:15.243 sock_completions: 5972 00:12:15.243 nvme_completions: 5653 00:12:15.243 submitted_requests: 8486 00:12:15.243 queued_requests: 1 00:12:15.243 00:12:15.243 ==================== 00:12:15.243 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:12:15.243 TCP transport: 00:12:15.243 polls: 6868 00:12:15.243 idle_polls: 0 00:12:15.243 sock_completions: 6868 00:12:15.243 nvme_completions: 5911 00:12:15.243 submitted_requests: 8840 00:12:15.243 queued_requests: 1 00:12:15.243 ======================================================== 00:12:15.243 Latency(us) 00:12:15.243 Device Information : IOPS MiB/s Average min max 00:12:15.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1412.73 353.18 91998.14 50941.23 169063.23 00:12:15.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1477.22 369.31 88062.91 45026.62 131190.60 00:12:15.243 ======================================================== 00:12:15.243 Total : 2889.95 722.49 89986.62 45026.62 169063.23 00:12:15.243 00:12:15.243 15:35:16 -- host/perf.sh@66 -- # sync 00:12:15.243 15:35:16 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.501 15:35:16 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:12:15.501 15:35:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:15.501 15:35:16 -- host/perf.sh@114 -- # nvmftestfini 00:12:15.501 15:35:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:15.501 15:35:16 -- nvmf/common.sh@117 -- # sync 00:12:15.501 15:35:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.501 15:35:16 -- nvmf/common.sh@120 -- # set +e 00:12:15.501 15:35:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.501 15:35:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.501 rmmod nvme_tcp 00:12:15.501 rmmod nvme_fabrics 00:12:15.501 rmmod nvme_keyring 00:12:15.501 15:35:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.501 15:35:16 -- nvmf/common.sh@124 -- # set -e 00:12:15.501 15:35:16 -- nvmf/common.sh@125 -- # return 0 00:12:15.501 15:35:16 -- nvmf/common.sh@478 -- # '[' -n 71848 ']' 00:12:15.501 15:35:16 -- nvmf/common.sh@479 -- # killprocess 71848 00:12:15.501 15:35:16 -- common/autotest_common.sh@936 -- # '[' -z 71848 ']' 00:12:15.501 15:35:16 -- common/autotest_common.sh@940 -- # kill -0 71848 00:12:15.501 15:35:16 -- common/autotest_common.sh@941 -- # uname 00:12:15.501 15:35:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.501 15:35:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71848 00:12:15.501 killing process with pid 71848 00:12:15.501 15:35:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:15.501 15:35:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.501 15:35:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71848' 00:12:15.501 15:35:16 -- common/autotest_common.sh@955 -- # kill 71848 00:12:15.501 15:35:16 -- common/autotest_common.sh@960 -- # wait 71848 00:12:16.437 15:35:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:16.437 15:35:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:16.437 15:35:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:16.437 15:35:17 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.437 15:35:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.437 15:35:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.437 15:35:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.437 15:35:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.437 15:35:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:16.437 00:12:16.437 real 0m14.122s 00:12:16.437 user 0m51.415s 00:12:16.437 sys 0m4.257s 00:12:16.437 15:35:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.437 15:35:17 -- common/autotest_common.sh@10 -- # set +x 00:12:16.437 ************************************ 00:12:16.438 END TEST nvmf_perf 00:12:16.438 ************************************ 00:12:16.438 15:35:17 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:12:16.438 15:35:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.438 15:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.438 15:35:17 -- common/autotest_common.sh@10 -- # set +x 00:12:16.438 ************************************ 00:12:16.438 START TEST nvmf_fio_host 00:12:16.438 ************************************ 00:12:16.438 15:35:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:12:16.697 * Looking for test storage... 00:12:16.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:16.697 15:35:17 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.697 15:35:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.697 15:35:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.697 15:35:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.697 15:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.697 15:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.697 15:35:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.697 15:35:17 -- paths/export.sh@5 -- # export PATH 00:12:16.697 15:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.697 15:35:17 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.697 15:35:17 -- nvmf/common.sh@7 -- # uname -s 00:12:16.697 15:35:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.697 15:35:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.697 15:35:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.697 15:35:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.697 15:35:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.697 15:35:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.697 15:35:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.697 15:35:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.697 15:35:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.697 15:35:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.697 15:35:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:16.697 15:35:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:16.697 15:35:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.697 15:35:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.697 15:35:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.697 15:35:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.697 15:35:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.697 15:35:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.697 15:35:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.697 15:35:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.697 15:35:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.697 15:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.698 15:35:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.698 15:35:17 -- paths/export.sh@5 -- # export PATH 00:12:16.698 15:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.698 15:35:17 -- nvmf/common.sh@47 -- # : 0 00:12:16.698 15:35:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.698 15:35:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.698 15:35:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.698 15:35:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.698 15:35:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.698 15:35:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.698 15:35:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.698 15:35:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.698 15:35:17 -- host/fio.sh@12 -- # nvmftestinit 00:12:16.698 15:35:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:16.698 15:35:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.698 15:35:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:16.698 15:35:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:16.698 15:35:17 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:12:16.698 15:35:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.698 15:35:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.698 15:35:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.698 15:35:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:16.698 15:35:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:16.698 15:35:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:16.698 15:35:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:16.698 15:35:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:16.698 15:35:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:16.698 15:35:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.698 15:35:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.698 15:35:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.698 15:35:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:16.698 15:35:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.698 15:35:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.698 15:35:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.698 15:35:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.698 15:35:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.698 15:35:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.698 15:35:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.698 15:35:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.698 15:35:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:16.698 15:35:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:16.698 Cannot find device "nvmf_tgt_br" 00:12:16.698 15:35:18 -- nvmf/common.sh@155 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.698 Cannot find device "nvmf_tgt_br2" 00:12:16.698 15:35:18 -- nvmf/common.sh@156 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:16.698 15:35:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:16.698 Cannot find device "nvmf_tgt_br" 00:12:16.698 15:35:18 -- nvmf/common.sh@158 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:16.698 Cannot find device "nvmf_tgt_br2" 00:12:16.698 15:35:18 -- nvmf/common.sh@159 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:16.698 15:35:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:16.698 15:35:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.698 15:35:18 -- nvmf/common.sh@162 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.698 15:35:18 -- nvmf/common.sh@163 -- # true 00:12:16.698 15:35:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.698 15:35:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.698 15:35:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:12:16.698 15:35:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.698 15:35:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.958 15:35:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.958 15:35:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.958 15:35:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.958 15:35:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.958 15:35:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.958 15:35:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.958 15:35:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.958 15:35:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.958 15:35:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.958 15:35:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.958 15:35:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.958 15:35:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.958 15:35:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.958 15:35:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.958 15:35:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.958 15:35:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.958 15:35:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.958 15:35:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.958 15:35:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:16.958 00:12:16.958 --- 10.0.0.2 ping statistics --- 00:12:16.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.958 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:16.958 15:35:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:16.958 00:12:16.958 --- 10.0.0.3 ping statistics --- 00:12:16.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.958 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:16.958 15:35:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:16.958 00:12:16.958 --- 10.0.0.1 ping statistics --- 00:12:16.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.958 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:16.958 15:35:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.958 15:35:18 -- nvmf/common.sh@422 -- # return 0 00:12:16.958 15:35:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:16.958 15:35:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.958 15:35:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:16.958 15:35:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:16.958 15:35:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.958 15:35:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:16.958 15:35:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:16.958 15:35:18 -- host/fio.sh@14 -- # [[ y != y ]] 00:12:16.958 15:35:18 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:12:16.958 15:35:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:16.958 15:35:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.958 15:35:18 -- host/fio.sh@22 -- # nvmfpid=72258 00:12:16.958 15:35:18 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:16.958 15:35:18 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.958 15:35:18 -- host/fio.sh@26 -- # waitforlisten 72258 00:12:16.958 15:35:18 -- common/autotest_common.sh@817 -- # '[' -z 72258 ']' 00:12:16.958 15:35:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.958 15:35:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:16.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.958 15:35:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.958 15:35:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:16.958 15:35:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.958 [2024-04-17 15:35:18.355876] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:12:16.959 [2024-04-17 15:35:18.355984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.217 [2024-04-17 15:35:18.500719] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.477 [2024-04-17 15:35:18.662188] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.477 [2024-04-17 15:35:18.662280] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.477 [2024-04-17 15:35:18.662294] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.477 [2024-04-17 15:35:18.662305] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.477 [2024-04-17 15:35:18.662314] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
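
With the target application launched inside the namespace, the subsystem configuration the trace below walks through is a short sequence of RPCs. Issued by hand with scripts/rpc.py against the same target, it comes down to roughly the following (a sketch only; the NQN, bdev name, and listener address are the ones this run uses, and the flag set mirrors what the test passes rather than a recommended configuration):

rpc.py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, flags as used by the test
rpc.py bdev_malloc_create 64 512 -b Malloc1                             # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
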
00:12:17.477 [2024-04-17 15:35:18.662491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.477 [2024-04-17 15:35:18.663522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.477 [2024-04-17 15:35:18.663676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.477 [2024-04-17 15:35:18.663786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.045 15:35:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.045 15:35:19 -- common/autotest_common.sh@850 -- # return 0 00:12:18.045 15:35:19 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 [2024-04-17 15:35:19.371149] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.045 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.045 15:35:19 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:12:18.045 15:35:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 15:35:19 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 Malloc1 00:12:18.045 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.045 15:35:19 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.045 15:35:19 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.045 15:35:19 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 [2024-04-17 15:35:19.479322] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.045 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.045 15:35:19 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.045 15:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.045 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:12:18.304 15:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.304 15:35:19 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:18.304 15:35:19 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:12:18.304 15:35:19 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
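
fio_nvme here is a thin wrapper around fio with the SPDK NVMe ioengine: it probes for sanitizer libraries to preload (the ldd/grep checks traced around this point), prepends the spdk_nvme plugin to LD_PRELOAD, and hands the NVMe-oF connection string to fio through --filename. Outside the harness, the same run reduces to roughly this invocation (paths and connection parameters exactly as they appear in this trace):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
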
00:12:18.304 15:35:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:18.304 15:35:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:18.304 15:35:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:18.304 15:35:19 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:18.304 15:35:19 -- common/autotest_common.sh@1327 -- # shift 00:12:18.304 15:35:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:18.304 15:35:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:12:18.304 15:35:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:12:18.304 15:35:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:12:18.304 15:35:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:12:18.304 15:35:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:12:18.304 15:35:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:18.304 15:35:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:12:18.304 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:18.304 fio-3.35 00:12:18.304 Starting 1 thread 00:12:20.837 00:12:20.837 test: (groupid=0, jobs=1): err= 0: pid=72319: Wed Apr 17 15:35:21 2024 00:12:20.837 read: IOPS=8447, BW=33.0MiB/s (34.6MB/s)(66.2MiB/2007msec) 00:12:20.837 slat (usec): min=2, max=335, avg= 2.79, stdev= 3.63 00:12:20.837 clat (usec): min=2601, max=14114, avg=7881.57, stdev=565.70 00:12:20.837 lat (usec): min=2644, max=14117, avg=7884.36, stdev=565.42 00:12:20.837 clat percentiles (usec): 00:12:20.837 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7504], 00:12:20.837 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 7963], 00:12:20.837 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:12:20.837 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[13173], 99.95th=[13566], 00:12:20.837 | 99.99th=[14091] 00:12:20.837 bw ( KiB/s): min=33464, max=34112, per=99.93%, avg=33765.00, stdev=318.15, samples=4 00:12:20.837 iops : min= 8366, max= 8528, avg=8441.25, stdev=79.54, samples=4 00:12:20.837 write: IOPS=8446, BW=33.0MiB/s (34.6MB/s)(66.2MiB/2007msec); 0 zone resets 00:12:20.837 slat (usec): min=2, max=884, avg= 2.88, stdev= 7.19 00:12:20.837 clat (usec): min=2432, max=13857, avg=7202.66, stdev=514.88 00:12:20.837 lat (usec): min=2476, max=13859, avg=7205.54, stdev=514.64 00:12:20.837 clat percentiles (usec): 00:12:20.837 | 1.00th=[ 6063], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:12:20.837 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7308], 00:12:20.837 | 70.00th=[ 
7373], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7898], 00:12:20.837 | 99.00th=[ 8356], 99.50th=[ 9110], 99.90th=[11469], 99.95th=[12518], 00:12:20.837 | 99.99th=[13829] 00:12:20.837 bw ( KiB/s): min=33440, max=34344, per=99.92%, avg=33761.00, stdev=419.20, samples=4 00:12:20.837 iops : min= 8360, max= 8586, avg=8440.25, stdev=104.80, samples=4 00:12:20.837 lat (msec) : 4=0.08%, 10=99.64%, 20=0.28% 00:12:20.837 cpu : usr=67.45%, sys=24.58%, ctx=24, majf=0, minf=6 00:12:20.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:12:20.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.837 issued rwts: total=16954,16953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.837 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:20.837 00:12:20.837 Run status group 0 (all jobs): 00:12:20.837 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.2MiB (69.4MB), run=2007-2007msec 00:12:20.837 WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.2MiB (69.4MB), run=2007-2007msec 00:12:20.837 15:35:21 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:12:20.837 15:35:21 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:12:20.837 15:35:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:20.837 15:35:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:20.837 15:35:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:20.837 15:35:21 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:20.837 15:35:21 -- common/autotest_common.sh@1327 -- # shift 00:12:20.837 15:35:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:20.837 15:35:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:12:20.837 15:35:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:12:20.837 15:35:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:12:20.837 15:35:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:20.837 15:35:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:12:20.838 15:35:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:12:20.838 15:35:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:20.838 15:35:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:12:20.838 test: (g=0): rw=randrw, 
bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:12:20.838 fio-3.35 00:12:20.838 Starting 1 thread 00:12:23.396 00:12:23.396 test: (groupid=0, jobs=1): err= 0: pid=72362: Wed Apr 17 15:35:24 2024 00:12:23.396 read: IOPS=7722, BW=121MiB/s (127MB/s)(243MiB/2011msec) 00:12:23.396 slat (usec): min=3, max=116, avg= 3.75, stdev= 1.91 00:12:23.396 clat (usec): min=2219, max=22811, avg=9460.91, stdev=2841.91 00:12:23.396 lat (usec): min=2222, max=22815, avg=9464.66, stdev=2841.93 00:12:23.396 clat percentiles (usec): 00:12:23.396 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6915], 00:12:23.396 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:12:23.396 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13173], 95.00th=[14615], 00:12:23.396 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:12:23.396 | 99.99th=[21627] 00:12:23.396 bw ( KiB/s): min=57792, max=67200, per=50.74%, avg=62696.00, stdev=3919.13, samples=4 00:12:23.396 iops : min= 3612, max= 4200, avg=3918.50, stdev=244.95, samples=4 00:12:23.396 write: IOPS=4362, BW=68.2MiB/s (71.5MB/s)(128MiB/1881msec); 0 zone resets 00:12:23.396 slat (usec): min=33, max=397, avg=38.54, stdev= 8.16 00:12:23.396 clat (usec): min=7412, max=22222, avg=12754.86, stdev=2558.82 00:12:23.396 lat (usec): min=7449, max=22266, avg=12793.39, stdev=2558.48 00:12:23.396 clat percentiles (usec): 00:12:23.396 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10552], 00:12:23.396 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[13042], 00:12:23.396 | 70.00th=[13960], 80.00th=[14877], 90.00th=[16188], 95.00th=[17695], 00:12:23.396 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:12:23.396 | 99.99th=[22152] 00:12:23.396 bw ( KiB/s): min=60864, max=69696, per=93.67%, avg=65384.00, stdev=3981.86, samples=4 00:12:23.396 iops : min= 3804, max= 4356, avg=4086.50, stdev=248.87, samples=4 00:12:23.396 lat (msec) : 4=0.34%, 10=42.59%, 20=56.79%, 50=0.28% 00:12:23.396 cpu : usr=81.44%, sys=14.98%, ctx=4, majf=0, minf=19 00:12:23.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.396 issued rwts: total=15530,8206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.396 00:12:23.396 Run status group 0 (all jobs): 00:12:23.396 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (254MB), run=2011-2011msec 00:12:23.396 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=128MiB (134MB), run=1881-1881msec 00:12:23.396 15:35:24 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.396 15:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.396 15:35:24 -- common/autotest_common.sh@10 -- # set +x 00:12:23.396 15:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.396 15:35:24 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:12:23.396 15:35:24 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:12:23.396 15:35:24 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:12:23.396 15:35:24 -- host/fio.sh@84 -- # nvmftestfini 00:12:23.396 15:35:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:23.396 15:35:24 -- nvmf/common.sh@117 -- # sync 00:12:23.396 15:35:24 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.396 15:35:24 -- nvmf/common.sh@120 -- # set +e 00:12:23.396 15:35:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.396 15:35:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.396 rmmod nvme_tcp 00:12:23.396 rmmod nvme_fabrics 00:12:23.396 rmmod nvme_keyring 00:12:23.396 15:35:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.396 15:35:24 -- nvmf/common.sh@124 -- # set -e 00:12:23.396 15:35:24 -- nvmf/common.sh@125 -- # return 0 00:12:23.396 15:35:24 -- nvmf/common.sh@478 -- # '[' -n 72258 ']' 00:12:23.396 15:35:24 -- nvmf/common.sh@479 -- # killprocess 72258 00:12:23.396 15:35:24 -- common/autotest_common.sh@936 -- # '[' -z 72258 ']' 00:12:23.396 15:35:24 -- common/autotest_common.sh@940 -- # kill -0 72258 00:12:23.396 15:35:24 -- common/autotest_common.sh@941 -- # uname 00:12:23.396 15:35:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.396 15:35:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72258 00:12:23.396 killing process with pid 72258 00:12:23.396 15:35:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:23.396 15:35:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:23.396 15:35:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72258' 00:12:23.396 15:35:24 -- common/autotest_common.sh@955 -- # kill 72258 00:12:23.396 15:35:24 -- common/autotest_common.sh@960 -- # wait 72258 00:12:23.655 15:35:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:23.655 15:35:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:23.655 15:35:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:23.655 15:35:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.655 15:35:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.655 15:35:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.655 15:35:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.655 15:35:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.655 15:35:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:23.655 00:12:23.655 real 0m7.174s 00:12:23.655 user 0m27.613s 00:12:23.655 sys 0m2.283s 00:12:23.655 15:35:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:23.655 ************************************ 00:12:23.655 END TEST nvmf_fio_host 00:12:23.655 ************************************ 00:12:23.655 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.655 15:35:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:12:23.655 15:35:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:23.655 15:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.655 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.913 ************************************ 00:12:23.913 START TEST nvmf_failover 00:12:23.913 ************************************ 00:12:23.913 15:35:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:12:23.913 * Looking for test storage... 
00:12:23.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:23.913 15:35:25 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:23.913 15:35:25 -- nvmf/common.sh@7 -- # uname -s 00:12:23.913 15:35:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.913 15:35:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.913 15:35:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.913 15:35:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.913 15:35:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.913 15:35:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.913 15:35:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.913 15:35:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.913 15:35:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.913 15:35:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.913 15:35:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:23.913 15:35:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:23.913 15:35:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.913 15:35:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.913 15:35:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:23.913 15:35:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.913 15:35:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.914 15:35:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.914 15:35:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.914 15:35:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.914 15:35:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.914 15:35:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.914 15:35:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.914 15:35:25 -- paths/export.sh@5 -- # export PATH 00:12:23.914 15:35:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.914 15:35:25 -- nvmf/common.sh@47 -- # : 0 00:12:23.914 15:35:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.914 15:35:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.914 15:35:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.914 15:35:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.914 15:35:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.914 15:35:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.914 15:35:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.914 15:35:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.914 15:35:25 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.914 15:35:25 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.914 15:35:25 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:23.914 15:35:25 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:23.914 15:35:25 -- host/failover.sh@18 -- # nvmftestinit 00:12:23.914 15:35:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:23.914 15:35:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.914 15:35:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:23.914 15:35:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:23.914 15:35:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:23.914 15:35:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.914 15:35:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.914 15:35:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.914 15:35:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:23.914 15:35:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:23.914 15:35:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:23.914 15:35:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:23.914 15:35:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:23.914 15:35:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:23.914 15:35:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.914 15:35:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.914 15:35:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:23.914 15:35:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:23.914 15:35:25 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:23.914 15:35:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:23.914 15:35:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:23.914 15:35:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.914 15:35:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:23.914 15:35:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:23.914 15:35:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:23.914 15:35:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:23.914 15:35:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:23.914 15:35:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:23.914 Cannot find device "nvmf_tgt_br" 00:12:23.914 15:35:25 -- nvmf/common.sh@155 -- # true 00:12:23.914 15:35:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.914 Cannot find device "nvmf_tgt_br2" 00:12:23.914 15:35:25 -- nvmf/common.sh@156 -- # true 00:12:23.914 15:35:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:23.914 15:35:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:23.914 Cannot find device "nvmf_tgt_br" 00:12:23.914 15:35:25 -- nvmf/common.sh@158 -- # true 00:12:23.914 15:35:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:23.914 Cannot find device "nvmf_tgt_br2" 00:12:23.914 15:35:25 -- nvmf/common.sh@159 -- # true 00:12:23.914 15:35:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:24.173 15:35:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:24.173 15:35:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.173 15:35:25 -- nvmf/common.sh@162 -- # true 00:12:24.173 15:35:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.173 15:35:25 -- nvmf/common.sh@163 -- # true 00:12:24.173 15:35:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.173 15:35:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.173 15:35:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.173 15:35:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.173 15:35:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.173 15:35:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.173 15:35:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.173 15:35:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.173 15:35:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.173 15:35:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:24.173 15:35:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:24.173 15:35:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:24.173 15:35:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:24.173 15:35:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:12:24.173 15:35:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.173 15:35:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.173 15:35:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:24.173 15:35:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:24.173 15:35:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.173 15:35:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.173 15:35:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.173 15:35:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.173 15:35:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.173 15:35:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:24.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:12:24.173 00:12:24.173 --- 10.0.0.2 ping statistics --- 00:12:24.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.173 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:24.173 15:35:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:24.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:24.173 00:12:24.173 --- 10.0.0.3 ping statistics --- 00:12:24.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.173 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:24.173 15:35:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:24.173 00:12:24.173 --- 10.0.0.1 ping statistics --- 00:12:24.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.173 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:24.173 15:35:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.173 15:35:25 -- nvmf/common.sh@422 -- # return 0 00:12:24.173 15:35:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:24.173 15:35:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.173 15:35:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:24.173 15:35:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:24.173 15:35:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.173 15:35:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:24.173 15:35:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:24.173 15:35:25 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:12:24.173 15:35:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:24.173 15:35:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:24.173 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.173 15:35:25 -- nvmf/common.sh@470 -- # nvmfpid=72587 00:12:24.173 15:35:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:24.173 15:35:25 -- nvmf/common.sh@471 -- # waitforlisten 72587 00:12:24.173 15:35:25 -- common/autotest_common.sh@817 -- # '[' -z 72587 ']' 00:12:24.173 15:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.173 15:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.173 15:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.173 15:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.173 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.432 [2024-04-17 15:35:25.642255] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:12:24.432 [2024-04-17 15:35:25.642349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.432 [2024-04-17 15:35:25.779046] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.692 [2024-04-17 15:35:25.926357] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.692 [2024-04-17 15:35:25.926414] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.692 [2024-04-17 15:35:25.926434] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.692 [2024-04-17 15:35:25.926446] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.692 [2024-04-17 15:35:25.926458] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
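The nvmf_veth_init sequence traced above gives the initiator and the target their own ends of a virtual network on a single host: the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 (plus 10.0.0.3/24 on a second interface), the initiator keeps nvmf_init_if with 10.0.0.1/24, and the host-side veth peers are enslaved to the nvmf_br bridge. A minimal sketch of the same topology, using only the ip/iptables calls visible in this trace (run as root; the second target interface for 10.0.0.3 is built the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk                                   # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge ties the host-side veth ends together
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # same reachability check the script performs above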
00:12:24.692 [2024-04-17 15:35:25.927105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.692 [2024-04-17 15:35:25.927230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.692 [2024-04-17 15:35:25.927240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.259 15:35:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:25.259 15:35:26 -- common/autotest_common.sh@850 -- # return 0 00:12:25.259 15:35:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:25.259 15:35:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:25.259 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:12:25.259 15:35:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.259 15:35:26 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:25.519 [2024-04-17 15:35:26.919554] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.519 15:35:26 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:26.085 Malloc0 00:12:26.085 15:35:27 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:26.343 15:35:27 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.343 15:35:27 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.910 [2024-04-17 15:35:28.049797] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.910 15:35:28 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:27.168 [2024-04-17 15:35:28.354196] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:27.168 15:35:28 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:12:27.168 [2024-04-17 15:35:28.598315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:12:27.426 15:35:28 -- host/failover.sh@31 -- # bdevperf_pid=72650 00:12:27.426 15:35:28 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:12:27.426 15:35:28 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.426 15:35:28 -- host/failover.sh@34 -- # waitforlisten 72650 /var/tmp/bdevperf.sock 00:12:27.426 15:35:28 -- common/autotest_common.sh@817 -- # '[' -z 72650 ']' 00:12:27.426 15:35:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.426 15:35:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.426 15:35:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:27.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
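Everything the failover test needs on the target side is configured over the RPC socket once nvmf_tgt is listening. Condensed from the rpc.py calls traced above (a sketch; $rpc stands for the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB I/O unit size, flags as in the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MB RAM-backed bdev with 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                  # three listeners give the host paths to fail over between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

bdevperf is then started against /var/tmp/bdevperf.sock with -q 128 -o 4096 -w verify -t 15 -f, and NVMe0 controllers are attached to it over ports 4420 and 4421 so the bdev_nvme layer has two paths to the same subsystem before the failover steps begin.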
00:12:27.426 15:35:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.426 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:12:28.360 15:35:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.360 15:35:29 -- common/autotest_common.sh@850 -- # return 0 00:12:28.360 15:35:29 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:28.618 NVMe0n1 00:12:28.618 15:35:29 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:28.877 00:12:28.877 15:35:30 -- host/failover.sh@39 -- # run_test_pid=72668 00:12:28.877 15:35:30 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:28.877 15:35:30 -- host/failover.sh@41 -- # sleep 1 00:12:30.254 15:35:31 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.254 [2024-04-17 15:35:31.496096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 
15:35:31.496296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 [2024-04-17 15:35:31.496792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257640 is same with the state(5) to be set 00:12:30.254 15:35:31 -- host/failover.sh@45 -- # sleep 3 00:12:33.537 15:35:34 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:33.537 00:12:33.537 15:35:34 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:33.795 [2024-04-17 15:35:35.087509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) 
to be set 00:12:33.795 [2024-04-17 15:35:35.087704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 [2024-04-17 15:35:35.087775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d40 is same with the state(5) to be set 00:12:33.795 15:35:35 -- host/failover.sh@50 -- # sleep 3 00:12:37.079 15:35:38 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.079 [2024-04-17 15:35:38.362602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.079 15:35:38 -- host/failover.sh@55 -- # sleep 1 00:12:38.014 15:35:39 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:12:38.271 [2024-04-17 15:35:39.688257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.271 [2024-04-17 15:35:39.688363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12566b0 is same with the state(5) to be set 00:12:38.529 15:35:39 -- host/failover.sh@59 -- # wait 72668 00:12:45.093 0 00:12:45.093 15:35:45 -- host/failover.sh@61 -- # killprocess 72650 00:12:45.093 15:35:45 -- common/autotest_common.sh@936 -- # '[' -z 72650 ']' 00:12:45.093 15:35:45 -- common/autotest_common.sh@940 -- # kill -0 72650 00:12:45.093 15:35:45 -- common/autotest_common.sh@941 -- # uname 00:12:45.093 15:35:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.093 15:35:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72650 00:12:45.093 killing process with pid 72650 00:12:45.093 15:35:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.093 15:35:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.093 15:35:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72650' 00:12:45.093 15:35:45 -- common/autotest_common.sh@955 -- # kill 72650 00:12:45.093 15:35:45 -- common/autotest_common.sh@960 
-- # wait 72650 00:12:45.093 15:35:45 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:12:45.094 [2024-04-17 15:35:28.658838] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:12:45.094 [2024-04-17 15:35:28.658957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72650 ] 00:12:45.094 [2024-04-17 15:35:28.790586] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.094 [2024-04-17 15:35:28.925851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.094 Running I/O for 15 seconds... 00:12:45.094 [2024-04-17 15:35:31.496898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.496985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
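These print_command/print_completion pairs are bdevperf's NVMe driver reporting every in-flight READ/WRITE on the path whose listener was just pulled: each completes with ABORTED - SQ DELETION, where the (00/08) is status code type 0x00 (generic) and status code 0x08 (command aborted due to SQ deletion), and the point of the failover test is that the verify workload keeps running on the other attached path. To tally how many I/Os were aborted across the whole 15-second run, something like the following against the bdevperf log cat'd above should do (a sketch; path as used in this run):

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt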
00:12:45.094 [2024-04-17 15:35:31.497857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.497974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.497989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.094 [2024-04-17 15:35:31.498288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.094 [2024-04-17 15:35:31.498743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.094 [2024-04-17 15:35:31.498770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.498803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:121 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.498974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.498988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71824 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:45.095 [2024-04-17 15:35:31.499455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.499815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.499977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.499991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500094] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.095 [2024-04-17 15:35:31.500326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.500355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.500384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.095 [2024-04-17 15:35:31.500413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.095 [2024-04-17 15:35:31.500428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:45.096 [2024-04-17 15:35:31.500733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.500822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.500859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.500889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.500919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.500977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.500993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.501007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.501042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501059] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:31.501073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:31.501307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25960 is same with the state(5) to be set 00:12:45.096 [2024-04-17 15:35:31.501340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:12:45.096 [2024-04-17 15:35:31.501351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:45.096 [2024-04-17 15:35:31.501362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72184 len:8 PRP1 0x0 PRP2 0x0 00:12:45.096 [2024-04-17 15:35:31.501376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 
15:35:31.501445] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e25960 was disconnected and freed. reset controller. 00:12:45.096 [2024-04-17 15:35:31.501464] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:12:45.096 [2024-04-17 15:35:31.501526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.096 [2024-04-17 15:35:31.501547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.096 [2024-04-17 15:35:31.501577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.096 [2024-04-17 15:35:31.501605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.096 [2024-04-17 15:35:31.501633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:31.501653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:45.096 [2024-04-17 15:35:31.501698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf1d0 (9): Bad file descriptor 00:12:45.096 [2024-04-17 15:35:31.505577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:45.096 [2024-04-17 15:35:31.544212] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:45.096 [2024-04-17 15:35:35.088263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088653] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.096 [2024-04-17 15:35:35.088841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.096 [2024-04-17 15:35:35.088874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.096 [2024-04-17 15:35:35.088891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.088905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.088921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.088935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.088951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.088967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71904 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:45.097 [2024-04-17 15:35:35.089658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.089889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.089970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.089984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.090016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.090056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.090087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.090118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.097 [2024-04-17 15:35:35.090150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090316] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.097 [2024-04-17 15:35:35.090348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.097 [2024-04-17 15:35:35.090364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.090679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.090971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.090985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:45.098 [2024-04-17 15:35:35.091001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.098 [2024-04-17 15:35:35.091956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.091971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.091985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.092009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.092022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.092038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.098 [2024-04-17 15:35:35.092052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.098 [2024-04-17 15:35:35.092075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:45.099 [2024-04-17 15:35:35.092278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:35.092425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc2910 is same with the state(5) to be set 00:12:45.099 [2024-04-17 15:35:35.092464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:12:45.099 [2024-04-17 15:35:35.092476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:45.099 [2024-04-17 15:35:35.092487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72360 len:8 PRP1 0x0 PRP2 0x0 00:12:45.099 [2024-04-17 15:35:35.092500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092571] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc2910 was disconnected and freed. reset controller. 
00:12:45.099 [2024-04-17 15:35:35.092590] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:12:45.099 [2024-04-17 15:35:35.092648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:35.092675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:35.092705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:35.092733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:35.092777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:35.092803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:45.099 [2024-04-17 15:35:35.096634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:45.099 [2024-04-17 15:35:35.096681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf1d0 (9): Bad file descriptor 00:12:45.099 [2024-04-17 15:35:35.134453] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
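The block above is one complete failover round as bdev_nvme logs it: outstanding I/O on qpair 0x1dc2910 is completed with ABORTED - SQ DELETION, the qpair is disconnected and freed, bdev_nvme_failover_trid moves the path from 10.0.0.2:4421 to 10.0.0.2:4422, and the round ends with "Resetting controller successful". A minimal sketch of how such a round can be forced and counted from the shell, using only the rpc.py and grep invocations that appear later in this trace (the helper name and the count check framing are illustrative, not part of failover.sh):

# Hedged sketch: force one failover round and count successful resets.
# Addresses, ports, NQN and file paths are copied from this trace; the
# function name and the final check are illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
out=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

force_failover_from() {
    local port=$1
    # Removing the path bdevperf is currently using makes bdev_nvme fail
    # over to the next registered trid (4421 -> 4422 in the messages above).
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
    sleep 3
}

force_failover_from 4421
count=$(grep -c 'Resetting controller successful' "$out")
(( count >= 1 ))   # one "Resetting controller successful" expected per forced round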
00:12:45.099 [2024-04-17 15:35:39.688418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:39.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:39.688524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:39.688560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.099 [2024-04-17 15:35:39.688588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf1d0 is same with the state(5) to be set 00:12:45.099 [2024-04-17 15:35:39.688671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.099 [2024-04-17 15:35:39.688961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.688977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.688991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:12:45.099 [2024-04-17 15:35:39.689531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.099 [2024-04-17 15:35:39.689545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.099 [2024-04-17 15:35:39.689562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.689576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689851] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.689894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.689910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.100 [2024-04-17 15:35:39.690558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 
15:35:39.690936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.690983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.690997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.100 [2024-04-17 15:35:39.691173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.100 [2024-04-17 15:35:39.691190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.691833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:45.101 [2024-04-17 15:35:39.691878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.691975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.691990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.101 [2024-04-17 15:35:39.692337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:123 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.101 [2024-04-17 15:35:39.692852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.101 [2024-04-17 15:35:39.692875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc2910 is same with the state(5) to 
be set 00:12:45.101 [2024-04-17 15:35:39.692894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:12:45.101 [2024-04-17 15:35:39.692905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:45.101 [2024-04-17 15:35:39.692916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:8 PRP1 0x0 PRP2 0x0 00:12:45.102 [2024-04-17 15:35:39.692929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.102 [2024-04-17 15:35:39.692999] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc2910 was disconnected and freed. reset controller. 00:12:45.102 [2024-04-17 15:35:39.693024] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:12:45.102 [2024-04-17 15:35:39.693040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:45.102 [2024-04-17 15:35:39.696869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:45.102 [2024-04-17 15:35:39.696913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf1d0 (9): Bad file descriptor 00:12:45.102 [2024-04-17 15:35:39.731205] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:45.102 00:12:45.102 Latency(us) 00:12:45.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.102 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:45.102 Verification LBA range: start 0x0 length 0x4000 00:12:45.102 NVMe0n1 : 15.01 8699.00 33.98 230.05 0.00 14301.87 629.29 17277.67 00:12:45.102 =================================================================================================================== 00:12:45.102 Total : 8699.00 33.98 230.05 0.00 14301.87 629.29 17277.67 00:12:45.102 Received shutdown signal, test time was about 15.000000 seconds 00:12:45.102 00:12:45.102 Latency(us) 00:12:45.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.102 =================================================================================================================== 00:12:45.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.102 15:35:45 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:12:45.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.102 15:35:45 -- host/failover.sh@65 -- # count=3 00:12:45.102 15:35:45 -- host/failover.sh@67 -- # (( count != 3 )) 00:12:45.102 15:35:45 -- host/failover.sh@73 -- # bdevperf_pid=72846 00:12:45.102 15:35:45 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:12:45.102 15:35:45 -- host/failover.sh@75 -- # waitforlisten 72846 /var/tmp/bdevperf.sock 00:12:45.102 15:35:45 -- common/autotest_common.sh@817 -- # '[' -z 72846 ']' 00:12:45.102 15:35:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.102 15:35:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:45.102 15:35:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:45.102 15:35:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:45.102 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:12:45.668 15:35:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:45.668 15:35:46 -- common/autotest_common.sh@850 -- # return 0 00:12:45.668 15:35:46 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:45.668 [2024-04-17 15:35:47.054279] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:45.668 15:35:47 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:12:45.925 [2024-04-17 15:35:47.290536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:12:45.925 15:35:47 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.184 NVMe0n1 00:12:46.184 15:35:47 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.749 00:12:46.749 15:35:47 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.749 00:12:47.008 15:35:48 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:47.008 15:35:48 -- host/failover.sh@82 -- # grep -q NVMe0 00:12:47.265 15:35:48 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:47.523 15:35:48 -- host/failover.sh@87 -- # sleep 3 00:12:50.826 15:35:51 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:50.826 15:35:51 -- host/failover.sh@88 -- # grep -q NVMe0 00:12:50.826 15:35:52 -- host/failover.sh@90 -- # run_test_pid=72929 00:12:50.826 15:35:52 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:50.826 15:35:52 -- host/failover.sh@92 -- # wait 72929 00:12:51.761 0 00:12:51.761 15:35:53 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:12:51.761 [2024-04-17 15:35:45.858930] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:12:51.761 [2024-04-17 15:35:45.859040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72846 ] 00:12:51.761 [2024-04-17 15:35:45.996152] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.761 [2024-04-17 15:35:46.132188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.761 [2024-04-17 15:35:48.715425] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:12:51.761 [2024-04-17 15:35:48.715577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.761 [2024-04-17 15:35:48.715605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.761 [2024-04-17 15:35:48.715625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.761 [2024-04-17 15:35:48.715639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.761 [2024-04-17 15:35:48.715654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.761 [2024-04-17 15:35:48.715669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.761 [2024-04-17 15:35:48.715683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.761 [2024-04-17 15:35:48.715701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.761 [2024-04-17 15:35:48.715720] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:51.761 [2024-04-17 15:35:48.715794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:51.761 [2024-04-17 15:35:48.715831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d21d0 (9): Bad file descriptor 00:12:51.761 [2024-04-17 15:35:48.726512] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:51.761 Running I/O for 1 seconds... 
00:12:51.761 00:12:51.761 Latency(us) 00:12:51.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.761 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.761 Verification LBA range: start 0x0 length 0x4000 00:12:51.761 NVMe0n1 : 1.02 6682.22 26.10 0.00 0.00 19074.85 2636.33 15609.48 00:12:51.761 =================================================================================================================== 00:12:51.761 Total : 6682.22 26.10 0.00 0.00 19074.85 2636.33 15609.48 00:12:51.761 15:35:53 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:51.761 15:35:53 -- host/failover.sh@95 -- # grep -q NVMe0 00:12:52.327 15:35:53 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:52.596 15:35:53 -- host/failover.sh@99 -- # grep -q NVMe0 00:12:52.596 15:35:53 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:52.854 15:35:54 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:53.112 15:35:54 -- host/failover.sh@101 -- # sleep 3 00:12:56.392 15:35:57 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:56.392 15:35:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:12:56.392 15:35:57 -- host/failover.sh@108 -- # killprocess 72846 00:12:56.392 15:35:57 -- common/autotest_common.sh@936 -- # '[' -z 72846 ']' 00:12:56.392 15:35:57 -- common/autotest_common.sh@940 -- # kill -0 72846 00:12:56.392 15:35:57 -- common/autotest_common.sh@941 -- # uname 00:12:56.393 15:35:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.393 15:35:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72846 00:12:56.393 killing process with pid 72846 00:12:56.393 15:35:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:56.393 15:35:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:56.393 15:35:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72846' 00:12:56.393 15:35:57 -- common/autotest_common.sh@955 -- # kill 72846 00:12:56.393 15:35:57 -- common/autotest_common.sh@960 -- # wait 72846 00:12:56.651 15:35:58 -- host/failover.sh@110 -- # sync 00:12:56.651 15:35:58 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.909 15:35:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:12:56.909 15:35:58 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:12:56.909 15:35:58 -- host/failover.sh@116 -- # nvmftestfini 00:12:56.909 15:35:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:56.909 15:35:58 -- nvmf/common.sh@117 -- # sync 00:12:56.909 15:35:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.909 15:35:58 -- nvmf/common.sh@120 -- # set +e 00:12:56.909 15:35:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.909 15:35:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.909 rmmod nvme_tcp 00:12:56.909 rmmod nvme_fabrics 00:12:56.909 rmmod nvme_keyring 00:12:57.167 15:35:58 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:12:57.167 15:35:58 -- nvmf/common.sh@124 -- # set -e 00:12:57.167 15:35:58 -- nvmf/common.sh@125 -- # return 0 00:12:57.167 15:35:58 -- nvmf/common.sh@478 -- # '[' -n 72587 ']' 00:12:57.167 15:35:58 -- nvmf/common.sh@479 -- # killprocess 72587 00:12:57.167 15:35:58 -- common/autotest_common.sh@936 -- # '[' -z 72587 ']' 00:12:57.167 15:35:58 -- common/autotest_common.sh@940 -- # kill -0 72587 00:12:57.167 15:35:58 -- common/autotest_common.sh@941 -- # uname 00:12:57.167 15:35:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.167 15:35:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72587 00:12:57.167 15:35:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:57.167 15:35:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:57.167 killing process with pid 72587 00:12:57.167 15:35:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72587' 00:12:57.167 15:35:58 -- common/autotest_common.sh@955 -- # kill 72587 00:12:57.167 15:35:58 -- common/autotest_common.sh@960 -- # wait 72587 00:12:57.425 15:35:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:57.425 15:35:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:57.425 15:35:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:57.425 15:35:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.425 15:35:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.425 15:35:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.425 15:35:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.425 15:35:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.425 15:35:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:57.425 00:12:57.425 real 0m33.670s 00:12:57.425 user 2m9.951s 00:12:57.425 sys 0m6.023s 00:12:57.425 15:35:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.425 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.425 ************************************ 00:12:57.425 END TEST nvmf_failover 00:12:57.425 ************************************ 00:12:57.425 15:35:58 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:12:57.425 15:35:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:57.425 15:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.425 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.684 ************************************ 00:12:57.684 START TEST nvmf_discovery 00:12:57.684 ************************************ 00:12:57.684 15:35:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:12:57.684 * Looking for test storage... 
00:12:57.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:57.684 15:35:59 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.684 15:35:59 -- nvmf/common.sh@7 -- # uname -s 00:12:57.684 15:35:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.684 15:35:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.684 15:35:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.684 15:35:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.684 15:35:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.684 15:35:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.684 15:35:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.684 15:35:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.684 15:35:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.684 15:35:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.684 15:35:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:57.684 15:35:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:12:57.684 15:35:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.684 15:35:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.684 15:35:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.684 15:35:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.684 15:35:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.684 15:35:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.684 15:35:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.684 15:35:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.684 15:35:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.684 15:35:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.684 15:35:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.684 15:35:59 -- paths/export.sh@5 -- # export PATH 00:12:57.684 15:35:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.684 15:35:59 -- nvmf/common.sh@47 -- # : 0 00:12:57.684 15:35:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.684 15:35:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.684 15:35:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.684 15:35:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.684 15:35:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.684 15:35:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.684 15:35:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.684 15:35:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.684 15:35:59 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:12:57.684 15:35:59 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:12:57.684 15:35:59 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:57.684 15:35:59 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:12:57.684 15:35:59 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:12:57.684 15:35:59 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:12:57.684 15:35:59 -- host/discovery.sh@25 -- # nvmftestinit 00:12:57.684 15:35:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:57.684 15:35:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.684 15:35:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:57.685 15:35:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:57.685 15:35:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:57.685 15:35:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.685 15:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.685 15:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.685 15:35:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:57.685 15:35:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:57.685 15:35:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:57.685 15:35:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:57.685 15:35:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:57.685 15:35:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:57.685 15:35:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.685 15:35:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.685 15:35:59 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:57.685 15:35:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:57.685 15:35:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.685 15:35:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.685 15:35:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.685 15:35:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.685 15:35:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.685 15:35:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.685 15:35:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.685 15:35:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.685 15:35:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:57.685 15:35:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:57.685 Cannot find device "nvmf_tgt_br" 00:12:57.685 15:35:59 -- nvmf/common.sh@155 -- # true 00:12:57.685 15:35:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.685 Cannot find device "nvmf_tgt_br2" 00:12:57.685 15:35:59 -- nvmf/common.sh@156 -- # true 00:12:57.685 15:35:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:57.685 15:35:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:57.685 Cannot find device "nvmf_tgt_br" 00:12:57.685 15:35:59 -- nvmf/common.sh@158 -- # true 00:12:57.685 15:35:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:57.685 Cannot find device "nvmf_tgt_br2" 00:12:57.685 15:35:59 -- nvmf/common.sh@159 -- # true 00:12:57.685 15:35:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:57.943 15:35:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:57.943 15:35:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.943 15:35:59 -- nvmf/common.sh@162 -- # true 00:12:57.943 15:35:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.943 15:35:59 -- nvmf/common.sh@163 -- # true 00:12:57.943 15:35:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:57.943 15:35:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:57.943 15:35:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:57.943 15:35:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:57.943 15:35:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:57.943 15:35:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:57.943 15:35:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:57.943 15:35:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:57.943 15:35:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:57.943 15:35:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:57.943 15:35:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:57.943 15:35:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:57.943 15:35:59 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:57.943 15:35:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:57.943 15:35:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:57.943 15:35:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:57.943 15:35:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:57.943 15:35:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:57.943 15:35:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:57.943 15:35:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:57.943 15:35:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:57.943 15:35:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:57.943 15:35:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:57.943 15:35:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:57.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:57.943 00:12:57.943 --- 10.0.0.2 ping statistics --- 00:12:57.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.943 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:57.943 15:35:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:57.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:57.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:57.943 00:12:57.943 --- 10.0.0.3 ping statistics --- 00:12:57.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.943 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:57.943 15:35:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:57.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:57.943 00:12:57.943 --- 10.0.0.1 ping statistics --- 00:12:57.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.943 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:57.943 15:35:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.943 15:35:59 -- nvmf/common.sh@422 -- # return 0 00:12:57.943 15:35:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:57.943 15:35:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.943 15:35:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:57.943 15:35:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:57.943 15:35:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.943 15:35:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:57.943 15:35:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:58.202 15:35:59 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:12:58.202 15:35:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:58.202 15:35:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:58.202 15:35:59 -- common/autotest_common.sh@10 -- # set +x 00:12:58.202 15:35:59 -- nvmf/common.sh@470 -- # nvmfpid=73207 00:12:58.202 15:35:59 -- nvmf/common.sh@471 -- # waitforlisten 73207 00:12:58.202 15:35:59 -- common/autotest_common.sh@817 -- # '[' -z 73207 ']' 00:12:58.202 15:35:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.202 15:35:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:58.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.202 15:35:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.202 15:35:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.202 15:35:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:58.202 15:35:59 -- common/autotest_common.sh@10 -- # set +x 00:12:58.202 [2024-04-17 15:35:59.469053] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:12:58.202 [2024-04-17 15:35:59.469202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.202 [2024-04-17 15:35:59.608104] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.460 [2024-04-17 15:35:59.759702] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.460 [2024-04-17 15:35:59.759794] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.460 [2024-04-17 15:35:59.759807] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.460 [2024-04-17 15:35:59.759816] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.460 [2024-04-17 15:35:59.759824] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.460 [2024-04-17 15:35:59.759853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.027 15:36:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.027 15:36:00 -- common/autotest_common.sh@850 -- # return 0 00:12:59.027 15:36:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:59.027 15:36:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:59.027 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 15:36:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.286 15:36:00 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.286 15:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 [2024-04-17 15:36:00.481620] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.286 15:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.286 15:36:00 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:12:59.286 15:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 [2024-04-17 15:36:00.489752] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:59.286 15:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.286 15:36:00 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:12:59.286 15:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 null0 00:12:59.286 15:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.286 15:36:00 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:12:59.286 15:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 null1 00:12:59.286 15:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.286 15:36:00 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:12:59.286 15:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 15:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.286 15:36:00 -- host/discovery.sh@45 -- # hostpid=73241 00:12:59.286 15:36:00 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:12:59.286 15:36:00 -- host/discovery.sh@46 -- # waitforlisten 73241 /tmp/host.sock 00:12:59.286 15:36:00 -- common/autotest_common.sh@817 -- # '[' -z 73241 ']' 00:12:59.286 15:36:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:12:59.286 15:36:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:59.286 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:12:59.286 15:36:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:12:59.286 15:36:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:59.286 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:12:59.286 [2024-04-17 15:36:00.581767] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:12:59.286 [2024-04-17 15:36:00.581887] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73241 ] 00:12:59.286 [2024-04-17 15:36:00.722919] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.550 [2024-04-17 15:36:00.866196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.124 15:36:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:00.124 15:36:01 -- common/autotest_common.sh@850 -- # return 0 00:13:00.124 15:36:01 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:00.124 15:36:01 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:13:00.124 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.124 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.124 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.124 15:36:01 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:13:00.124 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.124 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.124 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.124 15:36:01 -- host/discovery.sh@72 -- # notify_id=0 00:13:00.124 15:36:01 -- host/discovery.sh@83 -- # get_subsystem_names 00:13:00.124 15:36:01 -- host/discovery.sh@59 -- # sort 00:13:00.124 15:36:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:00.124 15:36:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:00.124 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.124 15:36:01 -- host/discovery.sh@59 -- # xargs 00:13:00.124 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.124 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.381 15:36:01 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:13:00.381 15:36:01 -- host/discovery.sh@84 -- # get_bdev_list 00:13:00.381 15:36:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:00.381 15:36:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:00.381 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.381 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # sort 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # xargs 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.382 15:36:01 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:13:00.382 15:36:01 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:13:00.382 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.382 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.382 15:36:01 -- host/discovery.sh@87 -- # get_subsystem_names 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:00.382 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.382 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:00.382 15:36:01 -- host/discovery.sh@59 
-- # xargs 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # sort 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.382 15:36:01 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:13:00.382 15:36:01 -- host/discovery.sh@88 -- # get_bdev_list 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:00.382 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.382 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # sort 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:00.382 15:36:01 -- host/discovery.sh@55 -- # xargs 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.382 15:36:01 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:13:00.382 15:36:01 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:13:00.382 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.382 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.382 15:36:01 -- host/discovery.sh@91 -- # get_subsystem_names 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:00.382 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.382 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # sort 00:13:00.382 15:36:01 -- host/discovery.sh@59 -- # xargs 00:13:00.382 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:01 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:13:00.640 15:36:01 -- host/discovery.sh@92 -- # get_bdev_list 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:00.640 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # sort 00:13:00.640 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # xargs 00:13:00.640 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:01 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:13:00.640 15:36:01 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:00.640 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 [2024-04-17 15:36:01.894261] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.640 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:01 -- host/discovery.sh@97 -- # get_subsystem_names 00:13:00.640 15:36:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:00.640 15:36:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:00.640 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:01 -- host/discovery.sh@59 -- # sort 00:13:00.640 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 15:36:01 -- host/discovery.sh@59 -- # xargs 00:13:00.640 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:01 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:13:00.640 15:36:01 
-- host/discovery.sh@98 -- # get_bdev_list 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:00.640 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # sort 00:13:00.640 15:36:01 -- host/discovery.sh@55 -- # xargs 00:13:00.640 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:02 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:13:00.640 15:36:02 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:13:00.640 15:36:02 -- host/discovery.sh@79 -- # expected_count=0 00:13:00.640 15:36:02 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:13:00.640 15:36:02 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:00.640 15:36:02 -- common/autotest_common.sh@901 -- # local max=10 00:13:00.640 15:36:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:00.640 15:36:02 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:00.640 15:36:02 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:00.640 15:36:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:13:00.640 15:36:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:02 -- host/discovery.sh@74 -- # jq '. | length' 00:13:00.640 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 15:36:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:02 -- host/discovery.sh@74 -- # notification_count=0 00:13:00.640 15:36:02 -- host/discovery.sh@75 -- # notify_id=0 00:13:00.640 15:36:02 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:00.640 15:36:02 -- common/autotest_common.sh@904 -- # return 0 00:13:00.640 15:36:02 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:13:00.640 15:36:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.640 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:13:00.640 15:36:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.640 15:36:02 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:00.640 15:36:02 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:00.640 15:36:02 -- common/autotest_common.sh@901 -- # local max=10 00:13:00.640 15:36:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:00.640 15:36:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:13:00.640 15:36:02 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:13:00.898 15:36:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:00.898 15:36:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.898 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:13:00.898 15:36:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:00.898 15:36:02 -- host/discovery.sh@59 -- # sort 00:13:00.898 15:36:02 -- host/discovery.sh@59 -- # xargs 00:13:00.898 15:36:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.898 15:36:02 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:13:00.898 15:36:02 -- common/autotest_common.sh@906 -- # sleep 1 00:13:01.155 [2024-04-17 15:36:02.527748] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:13:01.155 [2024-04-17 15:36:02.527811] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:13:01.155 [2024-04-17 15:36:02.527837] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:01.155 [2024-04-17 15:36:02.533793] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:13:01.155 [2024-04-17 15:36:02.590497] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:13:01.155 [2024-04-17 15:36:02.590539] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:13:01.721 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:01.721 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:13:01.721 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:13:01.721 15:36:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:01.721 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.721 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.721 15:36:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:01.721 15:36:03 -- host/discovery.sh@59 -- # sort 00:13:01.721 15:36:03 -- host/discovery.sh@59 -- # xargs 00:13:01.721 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:01.981 15:36:03 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:01.981 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:01.981 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.981 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # xargs 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # sort 00:13:01.981 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:01.981 15:36:03 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:01.981 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:01.981 15:36:03 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:13:01.981 15:36:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:13:01.981 15:36:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:13:01.981 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.981 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 15:36:03 -- host/discovery.sh@63 -- # sort -n 00:13:01.981 15:36:03 -- host/discovery.sh@63 -- # xargs 00:13:01.981 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:13:01.981 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:01.981 15:36:03 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:13:01.981 15:36:03 -- host/discovery.sh@79 -- # expected_count=1 00:13:01.981 15:36:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:13:01.981 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:01.981 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:01.981 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:01.981 15:36:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:13:01.981 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.981 15:36:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:13:01.981 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.981 15:36:03 -- host/discovery.sh@74 -- # notification_count=1 00:13:01.981 15:36:03 -- host/discovery.sh@75 -- # notify_id=1 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:01.981 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:01.981 15:36:03 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:13:01.981 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.981 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.981 15:36:03 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:01.981 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:13:01.981 15:36:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:01.981 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:01.981 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # sort 00:13:01.981 15:36:03 -- host/discovery.sh@55 -- # xargs 00:13:01.981 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.240 15:36:03 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:13:02.240 15:36:03 -- host/discovery.sh@79 -- # expected_count=1 00:13:02.240 15:36:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:13:02.240 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:02.240 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.240 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:02.240 15:36:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:13:02.240 15:36:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:13:02.240 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.240 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.240 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- host/discovery.sh@74 -- # notification_count=1 00:13:02.240 15:36:03 -- host/discovery.sh@75 -- # notify_id=2 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:02.240 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.240 15:36:03 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:13:02.240 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.240 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.240 [2024-04-17 15:36:03.495872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:02.240 [2024-04-17 15:36:03.496818] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:13:02.240 [2024-04-17 15:36:03.496861] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:02.240 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.240 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:13:02.240 [2024-04-17 15:36:03.502814] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:13:02.240 15:36:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:02.240 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.240 15:36:03 -- host/discovery.sh@59 -- # sort 00:13:02.240 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.240 15:36:03 -- host/discovery.sh@59 -- # xargs 00:13:02.240 15:36:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:02.240 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.240 15:36:03 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.240 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:13:02.240 [2024-04-17 15:36:03.560115] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:13:02.240 [2024-04-17 15:36:03.560146] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:13:02.240 [2024-04-17 
15:36:03.560154] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:13:02.240 15:36:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:02.240 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.240 15:36:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:02.240 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.240 15:36:03 -- host/discovery.sh@55 -- # sort 00:13:02.240 15:36:03 -- host/discovery.sh@55 -- # xargs 00:13:02.240 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.240 15:36:03 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.240 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:13:02.240 15:36:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:13:02.240 15:36:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:13:02.240 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.240 15:36:03 -- host/discovery.sh@63 -- # sort -n 00:13:02.240 15:36:03 -- host/discovery.sh@63 -- # xargs 00:13:02.240 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.240 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:13:02.240 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.240 15:36:03 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:13:02.240 15:36:03 -- host/discovery.sh@79 -- # expected_count=0 00:13:02.240 15:36:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:13:02.240 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:02.240 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.240 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.241 15:36:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:02.241 15:36:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:02.241 15:36:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:13:02.241 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.241 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 15:36:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:13:02.499 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.499 15:36:03 -- host/discovery.sh@74 -- # notification_count=0 00:13:02.499 15:36:03 -- host/discovery.sh@75 -- # notify_id=2 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:02.500 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.500 15:36:03 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:02.500 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.500 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 [2024-04-17 15:36:03.737170] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:13:02.500 [2024-04-17 15:36:03.737215] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:02.500 [2024-04-17 15:36:03.737445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.500 [2024-04-17 15:36:03.737483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.500 [2024-04-17 15:36:03.737499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.500 [2024-04-17 15:36:03.737509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.500 [2024-04-17 15:36:03.737520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.500 [2024-04-17 15:36:03.737530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.500 [2024-04-17 15:36:03.737541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.500 [2024-04-17 15:36:03.737551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.500 [2024-04-17 15:36:03.737562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9fa0 is same with the state(5) to be set 00:13:02.500 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.500 15:36:03 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.500 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:13:02.500 [2024-04-17 15:36:03.743153] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:13:02.500 [2024-04-17 15:36:03.743192] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:13:02.500 [2024-04-17 15:36:03.743275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9fa0 (9): 
Bad file descriptor 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:13:02.500 15:36:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:02.500 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.500 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 15:36:03 -- host/discovery.sh@59 -- # sort 00:13:02.500 15:36:03 -- host/discovery.sh@59 -- # xargs 00:13:02.500 15:36:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:02.500 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.500 15:36:03 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.500 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:13:02.500 15:36:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:02.500 15:36:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:02.500 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.500 15:36:03 -- host/discovery.sh@55 -- # sort 00:13:02.500 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 15:36:03 -- host/discovery.sh@55 -- # xargs 00:13:02.500 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.500 15:36:03 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.500 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:13:02.500 15:36:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:13:02.500 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.500 15:36:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:13:02.500 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 15:36:03 -- host/discovery.sh@63 -- # sort -n 00:13:02.500 15:36:03 -- host/discovery.sh@63 -- # xargs 00:13:02.500 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:13:02.500 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.500 15:36:03 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:13:02.500 15:36:03 -- host/discovery.sh@79 -- # expected_count=0 00:13:02.500 15:36:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:13:02.500 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:02.500 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.500 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:02.500 15:36:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:02.500 15:36:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:13:02.500 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.500 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 15:36:03 -- host/discovery.sh@74 -- # jq '. | length' 00:13:02.500 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.758 15:36:03 -- host/discovery.sh@74 -- # notification_count=0 00:13:02.758 15:36:03 -- host/discovery.sh@75 -- # notify_id=2 00:13:02.758 15:36:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:02.758 15:36:03 -- common/autotest_common.sh@904 -- # return 0 00:13:02.758 15:36:03 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:13:02.758 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.758 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.758 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.758 15:36:03 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:13:02.758 15:36:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:13:02.758 15:36:03 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.758 15:36:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.758 15:36:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:13:02.758 15:36:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:13:02.758 15:36:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:13:02.758 15:36:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:13:02.758 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.758 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.758 15:36:03 -- host/discovery.sh@59 -- # xargs 00:13:02.758 15:36:03 -- host/discovery.sh@59 -- # sort 00:13:02.758 15:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:13:02.758 15:36:04 -- common/autotest_common.sh@904 -- # return 0 00:13:02.758 15:36:04 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:13:02.758 15:36:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:13:02.758 15:36:04 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.758 15:36:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:13:02.758 15:36:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:02.758 15:36:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:02.758 15:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.758 15:36:04 -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.758 15:36:04 -- host/discovery.sh@55 -- # sort 00:13:02.758 15:36:04 -- host/discovery.sh@55 -- # xargs 00:13:02.758 15:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:13:02.758 15:36:04 -- common/autotest_common.sh@904 -- # return 0 00:13:02.758 15:36:04 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:13:02.758 15:36:04 -- host/discovery.sh@79 -- # expected_count=2 00:13:02.758 15:36:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:13:02.758 15:36:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:13:02.758 15:36:04 -- common/autotest_common.sh@901 -- # local max=10 00:13:02.758 15:36:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:13:02.758 15:36:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:13:02.758 15:36:04 -- host/discovery.sh@74 -- # jq '. | length' 00:13:02.758 15:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.758 15:36:04 -- common/autotest_common.sh@10 -- # set +x 00:13:02.758 15:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.758 15:36:04 -- host/discovery.sh@74 -- # notification_count=2 00:13:02.758 15:36:04 -- host/discovery.sh@75 -- # notify_id=4 00:13:02.758 15:36:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:13:02.758 15:36:04 -- common/autotest_common.sh@904 -- # return 0 00:13:02.758 15:36:04 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:02.758 15:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.758 15:36:04 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 [2024-04-17 15:36:05.188918] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:13:04.132 [2024-04-17 15:36:05.188963] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:13:04.132 [2024-04-17 15:36:05.189001] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:04.132 [2024-04-17 15:36:05.194955] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:13:04.132 [2024-04-17 15:36:05.255373] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:13:04.132 [2024-04-17 15:36:05.255504] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@638 -- # local es=0 00:13:04.132 15:36:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:04.132 15:36:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 request: 00:13:04.132 { 00:13:04.132 "name": "nvme", 00:13:04.132 "trtype": "tcp", 00:13:04.132 "traddr": "10.0.0.2", 00:13:04.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:13:04.132 "adrfam": "ipv4", 00:13:04.132 "trsvcid": "8009", 00:13:04.132 "wait_for_attach": true, 00:13:04.132 "method": "bdev_nvme_start_discovery", 00:13:04.132 "req_id": 1 00:13:04.132 } 00:13:04.132 Got JSON-RPC error response 00:13:04.132 response: 00:13:04.132 { 00:13:04.132 "code": -17, 00:13:04.132 "message": "File exists" 00:13:04.132 } 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:04.132 15:36:05 -- common/autotest_common.sh@641 -- # es=1 00:13:04.132 15:36:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:04.132 15:36:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:04.132 15:36:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:04.132 15:36:05 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # sort 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # xargs 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:13:04.132 15:36:05 -- host/discovery.sh@146 -- # get_bdev_list 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # sort 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # xargs 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@638 -- # local es=0 00:13:04.132 15:36:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:04.132 15:36:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 request: 00:13:04.132 { 00:13:04.132 "name": "nvme_second", 00:13:04.132 "trtype": "tcp", 00:13:04.132 "traddr": "10.0.0.2", 00:13:04.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:13:04.132 "adrfam": "ipv4", 00:13:04.132 "trsvcid": "8009", 00:13:04.132 "wait_for_attach": true, 00:13:04.132 "method": "bdev_nvme_start_discovery", 00:13:04.132 "req_id": 1 00:13:04.132 } 00:13:04.132 Got JSON-RPC error response 00:13:04.132 response: 00:13:04.132 { 00:13:04.132 "code": -17, 00:13:04.132 "message": "File exists" 00:13:04.132 } 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:04.132 15:36:05 -- common/autotest_common.sh@641 -- # es=1 00:13:04.132 15:36:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:04.132 15:36:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:04.132 15:36:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:04.132 15:36:05 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # xargs 00:13:04.132 15:36:05 -- host/discovery.sh@67 -- # sort 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:13:04.132 15:36:05 -- host/discovery.sh@152 -- # get_bdev_list 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # sort 00:13:04.132 15:36:05 -- host/discovery.sh@55 -- # xargs 00:13:04.132 15:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:13:04.132 15:36:05 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:13:04.132 15:36:05 -- common/autotest_common.sh@638 -- # local es=0 00:13:04.132 15:36:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:13:04.132 15:36:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:04.132 15:36:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:04.132 15:36:05 
-- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:13:04.132 15:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.132 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:05.508 [2024-04-17 15:36:06.528355] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:13:05.508 [2024-04-17 15:36:06.528552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:13:05.508 [2024-04-17 15:36:06.528603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:13:05.508 [2024-04-17 15:36:06.528623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba3d10 with addr=10.0.0.2, port=8010 00:13:05.508 [2024-04-17 15:36:06.528651] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:13:05.508 [2024-04-17 15:36:06.528664] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:13:05.508 [2024-04-17 15:36:06.528674] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:13:06.441 [2024-04-17 15:36:07.528320] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:13:06.441 [2024-04-17 15:36:07.528469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:13:06.441 [2024-04-17 15:36:07.528519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:13:06.441 [2024-04-17 15:36:07.528538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc38b70 with addr=10.0.0.2, port=8010 00:13:06.441 [2024-04-17 15:36:07.528565] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:13:06.441 [2024-04-17 15:36:07.528577] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:13:06.441 [2024-04-17 15:36:07.528588] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:13:07.374 request: 00:13:07.374 { 00:13:07.374 "name": "nvme_second", 00:13:07.375 "trtype": "tcp", 00:13:07.375 "traddr": "10.0.0.2", 00:13:07.375 "hostnqn": "nqn.2021-12.io.spdk:test", 00:13:07.375 "adrfam": "ipv4", 00:13:07.375 "trsvcid": "8010", 00:13:07.375 "attach_timeout_ms": 3000, 00:13:07.375 "method": "bdev_nvme_start_discovery", 00:13:07.375 "req_id": 1 00:13:07.375 } 00:13:07.375 [2024-04-17 15:36:08.528165] bdev_nvme.c:6941:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:13:07.375 Got JSON-RPC error response 00:13:07.375 response: 00:13:07.375 { 00:13:07.375 "code": -110, 00:13:07.375 "message": "Connection timed out" 00:13:07.375 } 00:13:07.375 15:36:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:07.375 15:36:08 -- common/autotest_common.sh@641 -- # es=1 00:13:07.375 15:36:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:07.375 15:36:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:07.375 15:36:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:07.375 15:36:08 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:13:07.375 15:36:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:13:07.375 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.375 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:13:07.375 15:36:08 -- host/discovery.sh@67 -- # sort 00:13:07.375 15:36:08 -- 
host/discovery.sh@67 -- # xargs 00:13:07.375 15:36:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:13:07.375 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.375 15:36:08 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:13:07.375 15:36:08 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:13:07.375 15:36:08 -- host/discovery.sh@161 -- # kill 73241 00:13:07.375 15:36:08 -- host/discovery.sh@162 -- # nvmftestfini 00:13:07.375 15:36:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:07.375 15:36:08 -- nvmf/common.sh@117 -- # sync 00:13:07.375 15:36:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.375 15:36:08 -- nvmf/common.sh@120 -- # set +e 00:13:07.375 15:36:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.375 15:36:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.375 rmmod nvme_tcp 00:13:07.375 rmmod nvme_fabrics 00:13:07.375 rmmod nvme_keyring 00:13:07.375 15:36:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.375 15:36:08 -- nvmf/common.sh@124 -- # set -e 00:13:07.375 15:36:08 -- nvmf/common.sh@125 -- # return 0 00:13:07.375 15:36:08 -- nvmf/common.sh@478 -- # '[' -n 73207 ']' 00:13:07.375 15:36:08 -- nvmf/common.sh@479 -- # killprocess 73207 00:13:07.375 15:36:08 -- common/autotest_common.sh@936 -- # '[' -z 73207 ']' 00:13:07.375 15:36:08 -- common/autotest_common.sh@940 -- # kill -0 73207 00:13:07.375 15:36:08 -- common/autotest_common.sh@941 -- # uname 00:13:07.375 15:36:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:07.375 15:36:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73207 00:13:07.375 15:36:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:07.375 killing process with pid 73207 00:13:07.375 15:36:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:07.375 15:36:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73207' 00:13:07.375 15:36:08 -- common/autotest_common.sh@955 -- # kill 73207 00:13:07.375 15:36:08 -- common/autotest_common.sh@960 -- # wait 73207 00:13:07.941 15:36:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:07.941 15:36:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:07.941 15:36:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:07.941 15:36:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.941 15:36:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.941 15:36:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.941 15:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.941 15:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.941 15:36:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:07.941 00:13:07.941 real 0m10.241s 00:13:07.941 user 0m19.403s 00:13:07.941 sys 0m2.166s 00:13:07.941 15:36:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.941 ************************************ 00:13:07.941 END TEST nvmf_discovery 00:13:07.941 15:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:07.941 ************************************ 00:13:07.941 15:36:09 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:13:07.941 15:36:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:07.941 15:36:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.941 15:36:09 -- common/autotest_common.sh@10 -- # set +x 
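The nvmf_discovery run that ends above drives the bdev_nvme discovery RPCs against the host application's socket at /tmp/host.sock. Condensed from the trace, the call sequence looks roughly like the sketch below; rpc_cmd is the suite's thin wrapper over SPDK's scripts/rpc.py, the controller names (nvme, nvme_second) and the host NQN nqn.2021-12.io.spdk:test are the test's own fixtures, and the "|| true" guards are illustrative rather than part of the script.

# Attach a discovery controller to 10.0.0.2:8009 and wait for the initial attach (-w).
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Re-using the same name while that discovery is still running is expected to fail
# with JSON-RPC error -17 ("File exists"), as seen in the request/response above.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || true

# Pointing a second discovery at port 8010, where nothing listens, with a 3000 ms
# attach timeout (-T) is expected to fail with -110 ("Connection timed out").
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true

# Inspect and tear down the discovery context.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme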
00:13:07.941 ************************************ 00:13:07.941 START TEST nvmf_discovery_remove_ifc 00:13:07.941 ************************************ 00:13:07.941 15:36:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:13:07.941 * Looking for test storage... 00:13:07.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.941 15:36:09 -- nvmf/common.sh@7 -- # uname -s 00:13:07.941 15:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.941 15:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.941 15:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.941 15:36:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.941 15:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.941 15:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.941 15:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.941 15:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.941 15:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.941 15:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.941 15:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:07.941 15:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:07.941 15:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.941 15:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.941 15:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.941 15:36:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.941 15:36:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.941 15:36:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.941 15:36:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.941 15:36:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.941 15:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.941 15:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.941 15:36:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.941 15:36:09 -- paths/export.sh@5 -- # export PATH 00:13:07.941 15:36:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.941 15:36:09 -- nvmf/common.sh@47 -- # : 0 00:13:07.941 15:36:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.941 15:36:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.941 15:36:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.941 15:36:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.941 15:36:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.941 15:36:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.941 15:36:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.941 15:36:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:13:07.941 15:36:09 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:13:07.941 15:36:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:07.941 15:36:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.941 15:36:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.199 15:36:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.200 15:36:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.200 15:36:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.200 15:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.200 15:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.200 15:36:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:08.200 15:36:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:08.200 15:36:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:08.200 15:36:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:08.200 15:36:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:08.200 15:36:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:08.200 15:36:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.200 15:36:09 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.200 15:36:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:08.200 15:36:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:08.200 15:36:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.200 15:36:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.200 15:36:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.200 15:36:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.200 15:36:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.200 15:36:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.200 15:36:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.200 15:36:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.200 15:36:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:08.200 15:36:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.200 Cannot find device "nvmf_tgt_br" 00:13:08.200 15:36:09 -- nvmf/common.sh@155 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.200 Cannot find device "nvmf_tgt_br2" 00:13:08.200 15:36:09 -- nvmf/common.sh@156 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.200 15:36:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.200 Cannot find device "nvmf_tgt_br" 00:13:08.200 15:36:09 -- nvmf/common.sh@158 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:08.200 Cannot find device "nvmf_tgt_br2" 00:13:08.200 15:36:09 -- nvmf/common.sh@159 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:08.200 15:36:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.200 15:36:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.200 15:36:09 -- nvmf/common.sh@162 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.200 15:36:09 -- nvmf/common.sh@163 -- # true 00:13:08.200 15:36:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.200 15:36:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.200 15:36:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.200 15:36:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.200 15:36:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:08.200 15:36:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.200 15:36:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.200 15:36:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:08.200 15:36:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:08.200 15:36:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:08.200 15:36:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:08.200 15:36:09 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:08.200 15:36:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:08.200 15:36:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.200 15:36:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.200 15:36:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:08.200 15:36:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:08.200 15:36:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:08.458 15:36:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:08.458 15:36:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:08.458 15:36:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:08.458 15:36:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:08.458 15:36:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:08.458 15:36:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:08.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:08.458 00:13:08.458 --- 10.0.0.2 ping statistics --- 00:13:08.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.458 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:08.458 15:36:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:08.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:08.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:13:08.458 00:13:08.458 --- 10.0.0.3 ping statistics --- 00:13:08.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.458 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:08.458 15:36:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:08.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:08.458 00:13:08.458 --- 10.0.0.1 ping statistics --- 00:13:08.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.458 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:08.458 15:36:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.458 15:36:09 -- nvmf/common.sh@422 -- # return 0 00:13:08.458 15:36:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:08.458 15:36:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.458 15:36:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:08.458 15:36:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:08.458 15:36:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.458 15:36:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:08.458 15:36:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:08.458 15:36:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:13:08.458 15:36:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:08.458 15:36:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:08.458 15:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:08.458 15:36:09 -- nvmf/common.sh@470 -- # nvmfpid=73701 00:13:08.458 15:36:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:08.458 15:36:09 -- nvmf/common.sh@471 -- # waitforlisten 73701 00:13:08.458 15:36:09 -- common/autotest_common.sh@817 -- # '[' -z 73701 ']' 00:13:08.458 15:36:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.458 15:36:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:08.458 15:36:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.458 15:36:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:08.458 15:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:08.458 [2024-04-17 15:36:09.779200] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:13:08.458 [2024-04-17 15:36:09.779321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.716 [2024-04-17 15:36:09.916347] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.716 [2024-04-17 15:36:10.073379] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.716 [2024-04-17 15:36:10.073462] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.716 [2024-04-17 15:36:10.073494] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.716 [2024-04-17 15:36:10.073504] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.716 [2024-04-17 15:36:10.073513] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
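The nvmf_veth_init block above (NET_TYPE=virt) builds the virtual test network the rest of this test runs on. Stripped of the suite's cleanup and error handling, the topology amounts to roughly the following; the namespace and interface names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, ...) are the suite's fixtures, and the second target interface (nvmf_tgt_if2 at 10.0.0.3) is configured the same way and omitted here.

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side, with the host-side peers joined by a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 = target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic to the initiator interface and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1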
00:13:08.716 [2024-04-17 15:36:10.073551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.650 15:36:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:09.650 15:36:10 -- common/autotest_common.sh@850 -- # return 0 00:13:09.650 15:36:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:09.650 15:36:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:09.650 15:36:10 -- common/autotest_common.sh@10 -- # set +x 00:13:09.650 15:36:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.650 15:36:10 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:13:09.650 15:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.650 15:36:10 -- common/autotest_common.sh@10 -- # set +x 00:13:09.650 [2024-04-17 15:36:10.872521] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.650 [2024-04-17 15:36:10.880589] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:09.650 null0 00:13:09.650 [2024-04-17 15:36:10.912603] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.650 15:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.650 15:36:10 -- host/discovery_remove_ifc.sh@59 -- # hostpid=73733 00:13:09.650 15:36:10 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:13:09.650 15:36:10 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 73733 /tmp/host.sock 00:13:09.650 15:36:10 -- common/autotest_common.sh@817 -- # '[' -z 73733 ']' 00:13:09.650 15:36:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:13:09.650 15:36:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:09.650 15:36:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:13:09.650 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:13:09.650 15:36:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:09.650 15:36:10 -- common/autotest_common.sh@10 -- # set +x 00:13:09.650 [2024-04-17 15:36:10.983172] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:13:09.650 [2024-04-17 15:36:10.983264] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 00:13:09.907 [2024-04-17 15:36:11.118758] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.907 [2024-04-17 15:36:11.268263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.839 15:36:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.839 15:36:11 -- common/autotest_common.sh@850 -- # return 0 00:13:10.839 15:36:11 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:10.839 15:36:11 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:13:10.839 15:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.839 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:10.839 15:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.839 15:36:11 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:13:10.839 15:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.839 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:10.839 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.839 15:36:12 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:13:10.839 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.839 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:13:11.771 [2024-04-17 15:36:13.106244] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:13:11.771 [2024-04-17 15:36:13.106295] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:13:11.771 [2024-04-17 15:36:13.106316] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:11.771 [2024-04-17 15:36:13.112302] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:13:11.771 [2024-04-17 15:36:13.169377] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:13:11.771 [2024-04-17 15:36:13.169471] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:13:11.771 [2024-04-17 15:36:13.169504] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:13:11.771 [2024-04-17 15:36:13.169525] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:13:11.771 [2024-04-17 15:36:13.169557] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:13:11.771 15:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.771 15:36:13 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:13:11.771 15:36:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:11.771 [2024-04-17 15:36:13.174665] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1efc090 was disconnected and freed. delete nvme_qpair. 
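For the discovery_remove_ifc case, two SPDK applications are involved: the nvmf_tgt target running inside the namespace (configured over the default /var/tmp/spdk.sock), and a second nvmf_tgt that acts purely as the NVMe-oF host, driven over /tmp/host.sock with bdev_nvme debug logging. A rough sketch of the host-side bring-up, with paths abbreviated and the target-side RPC setup (TCP transport, null bdev, listeners on 8009 and 4420) elided:

# Target inside the namespace (RPC on the default /var/tmp/spdk.sock).
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Second instance acting as the host/initiator, on its own RPC socket.
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1   # exactly as invoked by the script
rpc_cmd -s /tmp/host.sock framework_start_init
# Discovery with short reconnect/loss timeouts so interface removal is noticed quickly.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

The short --ctrlr-loss-timeout-sec keeps the test fast: once the path has been gone for longer than two seconds the host gives up on the controller and deletes its bdevs, which is what the following steps verify.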
00:13:11.771 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:11.771 15:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.771 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:11.771 15:36:13 -- common/autotest_common.sh@10 -- # set +x 00:13:11.772 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:11.772 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:11.772 15:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:12.030 15:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.030 15:36:13 -- common/autotest_common.sh@10 -- # set +x 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:12.030 15:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:12.030 15:36:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:12.966 15:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.966 15:36:14 -- common/autotest_common.sh@10 -- # set +x 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:12.966 15:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:12.966 15:36:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:14.351 15:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.351 15:36:15 -- common/autotest_common.sh@10 -- # set +x 00:13:14.351 15:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:14.351 15:36:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:15.285 15:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.285 15:36:16 -- common/autotest_common.sh@10 -- # set +x 00:13:15.285 15:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:15.285 15:36:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:16.236 15:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.236 15:36:17 -- common/autotest_common.sh@10 -- # set +x 00:13:16.236 15:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:16.236 15:36:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:17.169 15:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.169 15:36:18 -- common/autotest_common.sh@10 -- # set +x 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:17.169 15:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.169 [2024-04-17 15:36:18.596422] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:13:17.169 [2024-04-17 15:36:18.596499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.169 [2024-04-17 15:36:18.596518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.169 [2024-04-17 15:36:18.596533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.169 [2024-04-17 15:36:18.596543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.169 [2024-04-17 15:36:18.596553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.169 [2024-04-17 15:36:18.596563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.169 [2024-04-17 15:36:18.596574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.169 [2024-04-17 15:36:18.596583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.169 [2024-04-17 15:36:18.596593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:13:17.169 [2024-04-17 15:36:18.596602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.169 [2024-04-17 15:36:18.596612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6af70 is same with the state(5) to be set 00:13:17.169 [2024-04-17 15:36:18.606416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6af70 (9): Bad file descriptor 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:17.169 15:36:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:17.427 [2024-04-17 15:36:18.616440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:18.401 15:36:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:18.401 15:36:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:18.401 15:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.401 15:36:19 -- common/autotest_common.sh@10 -- # set +x 00:13:18.401 15:36:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:18.401 15:36:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:18.401 15:36:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:18.401 [2024-04-17 15:36:19.675912] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:13:19.336 [2024-04-17 15:36:20.699900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:13:20.714 [2024-04-17 15:36:21.723904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:13:20.714 [2024-04-17 15:36:21.724397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6af70 with addr=10.0.0.2, port=4420 00:13:20.714 [2024-04-17 15:36:21.724453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6af70 is same with the state(5) to be set 00:13:20.714 [2024-04-17 15:36:21.725411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6af70 (9): Bad file descriptor 00:13:20.714 [2024-04-17 15:36:21.725478] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:13:20.714 [2024-04-17 15:36:21.725532] bdev_nvme.c:6649:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:13:20.714 [2024-04-17 15:36:21.725606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.714 [2024-04-17 15:36:21.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.714 [2024-04-17 15:36:21.725674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.714 [2024-04-17 15:36:21.725695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.714 [2024-04-17 15:36:21.725719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.714 [2024-04-17 15:36:21.725739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.714 [2024-04-17 15:36:21.725789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.714 [2024-04-17 15:36:21.725812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.714 [2024-04-17 15:36:21.725834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.714 [2024-04-17 15:36:21.725855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.714 [2024-04-17 15:36:21.725877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
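The cascade of ABORTED / "Bad file descriptor" messages above corresponds to the core of the test: discovery_remove_ifc.sh pulls the target's address and link out from under the connected host, waits for the nvme0n1 bdev to be deleted, then restores the interface and waits for the re-attached subsystem to surface as nvme1n1. A condensed sketch follows; the polling one-liners are illustrative stand-ins for the script's wait_for_bdev helper.

# Pull the target address out from under the connected host.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# Poll until the nvme0n1 bdev disappears (the 2 s ctrlr-loss timeout has expired).
while [[ -n "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]]; do sleep 1; done

# Restore the interface; discovery reconnects and re-attaches the subsystem as nvme1.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# Poll until the new bdev (nvme1n1) shows up.
until rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -q nvme1n1; do sleep 1; done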
00:13:20.714 [2024-04-17 15:36:21.725941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6a830 (9): Bad file descriptor 00:13:20.714 [2024-04-17 15:36:21.726940] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:13:20.714 [2024-04-17 15:36:21.726991] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:13:20.714 15:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.714 15:36:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:13:20.714 15:36:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:21.650 15:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:21.650 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:21.650 15:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:21.650 15:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.650 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:21.650 15:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:13:21.650 15:36:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:13:22.610 [2024-04-17 15:36:23.732257] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:13:22.610 [2024-04-17 15:36:23.732302] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:13:22.610 [2024-04-17 15:36:23.732323] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:13:22.610 [2024-04-17 15:36:23.738300] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:13:22.610 [2024-04-17 15:36:23.793995] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:13:22.610 [2024-04-17 15:36:23.794231] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:13:22.610 [2024-04-17 15:36:23.794300] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:13:22.610 [2024-04-17 15:36:23.794414] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:13:22.610 [2024-04-17 15:36:23.794554] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:13:22.610 [2024-04-17 15:36:23.800644] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f09560 was disconnected and freed. delete nvme_qpair. 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:13:22.610 15:36:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.610 15:36:23 -- common/autotest_common.sh@10 -- # set +x 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:13:22.610 15:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:13:22.610 15:36:23 -- host/discovery_remove_ifc.sh@90 -- # killprocess 73733 00:13:22.610 15:36:23 -- common/autotest_common.sh@936 -- # '[' -z 73733 ']' 00:13:22.610 15:36:23 -- common/autotest_common.sh@940 -- # kill -0 73733 00:13:22.610 15:36:23 -- common/autotest_common.sh@941 -- # uname 00:13:22.610 15:36:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.610 15:36:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73733 00:13:22.610 killing process with pid 73733 00:13:22.610 15:36:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:22.610 15:36:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:22.610 15:36:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73733' 00:13:22.610 15:36:23 -- common/autotest_common.sh@955 -- # kill 73733 00:13:22.610 15:36:23 -- common/autotest_common.sh@960 -- # wait 73733 00:13:23.177 15:36:24 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:13:23.177 15:36:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:23.177 15:36:24 -- nvmf/common.sh@117 -- # sync 00:13:23.177 15:36:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.177 15:36:24 -- nvmf/common.sh@120 -- # set +e 00:13:23.177 15:36:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.177 15:36:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.177 rmmod nvme_tcp 00:13:23.177 rmmod nvme_fabrics 00:13:23.177 rmmod nvme_keyring 00:13:23.177 15:36:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.177 15:36:24 -- nvmf/common.sh@124 -- # set -e 00:13:23.177 15:36:24 -- nvmf/common.sh@125 -- # return 0 00:13:23.177 15:36:24 -- nvmf/common.sh@478 -- # '[' -n 73701 ']' 00:13:23.177 15:36:24 -- nvmf/common.sh@479 -- # killprocess 73701 00:13:23.177 15:36:24 -- common/autotest_common.sh@936 -- # '[' -z 73701 ']' 00:13:23.177 15:36:24 -- common/autotest_common.sh@940 -- # kill -0 73701 00:13:23.177 15:36:24 -- common/autotest_common.sh@941 -- # uname 00:13:23.177 15:36:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.177 15:36:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73701 00:13:23.177 killing process with pid 73701 00:13:23.177 15:36:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:23.177 15:36:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
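The killprocess calls above follow a recurring pattern in these tests: check that the pid is still alive, inspect its command name before signalling it, then kill and wait so the exit status is collected. A rough sketch of that flow, assuming Linux and the reactor_* process names shown in the trace (the real helper in common/autotest_common.sh also handles sudo-wrapped processes and FreeBSD):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if it already exited
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0, reactor_1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }

    killprocess 73733   # pid taken from the trace above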
00:13:23.177 15:36:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73701' 00:13:23.177 15:36:24 -- common/autotest_common.sh@955 -- # kill 73701 00:13:23.177 15:36:24 -- common/autotest_common.sh@960 -- # wait 73701 00:13:23.435 15:36:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:23.435 15:36:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:23.435 15:36:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:23.435 15:36:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.435 15:36:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.435 15:36:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.435 15:36:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.435 15:36:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.435 15:36:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:23.435 00:13:23.435 real 0m15.549s 00:13:23.435 user 0m24.565s 00:13:23.435 sys 0m2.929s 00:13:23.435 ************************************ 00:13:23.435 END TEST nvmf_discovery_remove_ifc 00:13:23.435 ************************************ 00:13:23.435 15:36:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:23.435 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:13:23.435 15:36:24 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:13:23.435 15:36:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:23.435 15:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.435 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:13:23.693 ************************************ 00:13:23.693 START TEST nvmf_identify_kernel_target 00:13:23.693 ************************************ 00:13:23.693 15:36:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:13:23.693 * Looking for test storage... 
00:13:23.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:23.693 15:36:25 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.693 15:36:25 -- nvmf/common.sh@7 -- # uname -s 00:13:23.693 15:36:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.693 15:36:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.693 15:36:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.693 15:36:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.693 15:36:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.693 15:36:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.693 15:36:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.693 15:36:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.693 15:36:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.693 15:36:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.693 15:36:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:23.693 15:36:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:23.693 15:36:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.693 15:36:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.693 15:36:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.693 15:36:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.693 15:36:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.693 15:36:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.693 15:36:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.693 15:36:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.693 15:36:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.694 15:36:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.694 15:36:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.694 15:36:25 -- paths/export.sh@5 -- # export PATH 00:13:23.694 15:36:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.694 15:36:25 -- nvmf/common.sh@47 -- # : 0 00:13:23.694 15:36:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.694 15:36:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.694 15:36:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.694 15:36:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.694 15:36:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.694 15:36:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.694 15:36:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.694 15:36:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.694 15:36:25 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:13:23.694 15:36:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:23.694 15:36:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.694 15:36:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:23.694 15:36:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:23.694 15:36:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:23.694 15:36:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.694 15:36:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.694 15:36:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.694 15:36:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:23.694 15:36:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:23.694 15:36:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:23.694 15:36:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:23.694 15:36:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:23.694 15:36:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:23.694 15:36:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.694 15:36:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.694 15:36:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.694 15:36:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:23.694 15:36:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.694 15:36:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.694 15:36:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.694 15:36:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:23.694 15:36:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.694 15:36:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.694 15:36:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.694 15:36:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.694 15:36:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:23.694 15:36:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:23.694 Cannot find device "nvmf_tgt_br" 00:13:23.694 15:36:25 -- nvmf/common.sh@155 -- # true 00:13:23.694 15:36:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.694 Cannot find device "nvmf_tgt_br2" 00:13:23.694 15:36:25 -- nvmf/common.sh@156 -- # true 00:13:23.694 15:36:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:23.694 15:36:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:23.694 Cannot find device "nvmf_tgt_br" 00:13:23.694 15:36:25 -- nvmf/common.sh@158 -- # true 00:13:23.694 15:36:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:23.694 Cannot find device "nvmf_tgt_br2" 00:13:23.694 15:36:25 -- nvmf/common.sh@159 -- # true 00:13:23.694 15:36:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:23.953 15:36:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:23.953 15:36:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.953 15:36:25 -- nvmf/common.sh@162 -- # true 00:13:23.953 15:36:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.953 15:36:25 -- nvmf/common.sh@163 -- # true 00:13:23.953 15:36:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.953 15:36:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.953 15:36:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.953 15:36:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.953 15:36:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.953 15:36:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.953 15:36:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.953 15:36:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.953 15:36:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.953 15:36:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:23.953 15:36:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:23.953 15:36:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:23.953 15:36:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:23.953 15:36:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.953 15:36:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:23.953 15:36:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.953 15:36:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:23.953 15:36:25 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:23.953 15:36:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.953 15:36:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.953 15:36:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:23.953 15:36:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.953 15:36:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.953 15:36:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:23.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:23.953 00:13:23.953 --- 10.0.0.2 ping statistics --- 00:13:23.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.953 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:23.953 15:36:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:23.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:13:23.953 00:13:23.953 --- 10.0.0.3 ping statistics --- 00:13:23.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.953 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:23.953 15:36:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:23.953 00:13:23.953 --- 10.0.0.1 ping statistics --- 00:13:23.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.953 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:23.953 15:36:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.953 15:36:25 -- nvmf/common.sh@422 -- # return 0 00:13:23.953 15:36:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.953 15:36:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.953 15:36:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:23.953 15:36:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:23.953 15:36:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.953 15:36:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:23.953 15:36:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:23.953 15:36:25 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:13:23.953 15:36:25 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:13:24.211 15:36:25 -- nvmf/common.sh@717 -- # local ip 00:13:24.211 15:36:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:24.211 15:36:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:24.211 15:36:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:24.211 15:36:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:24.211 15:36:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:24.211 15:36:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:24.211 15:36:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:24.211 15:36:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:24.211 15:36:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:24.211 15:36:25 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:13:24.211 15:36:25 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:13:24.211 15:36:25 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:13:24.211 15:36:25 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:13:24.211 15:36:25 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:13:24.211 15:36:25 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:13:24.211 15:36:25 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:13:24.211 15:36:25 -- nvmf/common.sh@628 -- # local block nvme 00:13:24.211 15:36:25 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:13:24.211 15:36:25 -- nvmf/common.sh@631 -- # modprobe nvmet 00:13:24.211 15:36:25 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:13:24.211 15:36:25 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:24.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.469 Waiting for block devices as requested 00:13:24.469 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.727 15:36:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:24.727 15:36:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:24.727 15:36:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:13:24.727 15:36:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:24.727 15:36:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:24.727 15:36:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.727 15:36:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:13:24.727 15:36:25 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:24.727 15:36:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:13:24.727 No valid GPT data, bailing 00:13:24.727 15:36:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:24.727 15:36:26 -- scripts/common.sh@391 -- # pt= 00:13:24.727 15:36:26 -- scripts/common.sh@392 -- # return 1 00:13:24.727 15:36:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:13:24.727 15:36:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:24.727 15:36:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:13:24.727 15:36:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:13:24.727 15:36:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:13:24.727 15:36:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:24.727 15:36:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.727 15:36:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:13:24.727 15:36:26 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:13:24.727 15:36:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:13:24.727 No valid GPT data, bailing 00:13:24.727 15:36:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:13:24.727 15:36:26 -- scripts/common.sh@391 -- # pt= 00:13:24.727 15:36:26 -- scripts/common.sh@392 -- # return 1 00:13:24.727 15:36:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:13:24.727 15:36:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:24.727 15:36:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:13:24.727 15:36:26 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:13:24.727 15:36:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:13:24.727 15:36:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:24.727 15:36:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.727 15:36:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:13:24.727 15:36:26 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:13:24.727 15:36:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:13:24.986 No valid GPT data, bailing 00:13:24.986 15:36:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:13:24.986 15:36:26 -- scripts/common.sh@391 -- # pt= 00:13:24.986 15:36:26 -- scripts/common.sh@392 -- # return 1 00:13:24.986 15:36:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:13:24.986 15:36:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:24.986 15:36:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:13:24.986 15:36:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:13:24.986 15:36:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:24.986 15:36:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:24.986 15:36:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.986 15:36:26 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:13:24.986 15:36:26 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:13:24.986 15:36:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:13:24.986 No valid GPT data, bailing 00:13:24.986 15:36:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:24.986 15:36:26 -- scripts/common.sh@391 -- # pt= 00:13:24.986 15:36:26 -- scripts/common.sh@392 -- # return 1 00:13:24.986 15:36:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:13:24.986 15:36:26 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:13:24.986 15:36:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:13:24.986 15:36:26 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:13:24.986 15:36:26 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:13:24.986 15:36:26 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:13:24.986 15:36:26 -- nvmf/common.sh@656 -- # echo 1 00:13:24.986 15:36:26 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:13:24.986 15:36:26 -- nvmf/common.sh@658 -- # echo 1 00:13:24.986 15:36:26 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:13:24.986 15:36:26 -- nvmf/common.sh@661 -- # echo tcp 00:13:24.986 15:36:26 -- nvmf/common.sh@662 -- # echo 4420 00:13:24.986 15:36:26 -- nvmf/common.sh@663 -- # echo ipv4 00:13:24.986 15:36:26 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:13:24.986 15:36:26 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -a 10.0.0.1 -t tcp -s 4420 00:13:24.986 00:13:24.986 Discovery Log Number of Records 2, Generation counter 2 00:13:24.986 =====Discovery Log Entry 0====== 00:13:24.986 trtype: tcp 00:13:24.986 adrfam: ipv4 00:13:24.986 subtype: current discovery subsystem 00:13:24.986 treq: not specified, sq flow control disable supported 00:13:24.986 portid: 1 00:13:24.986 trsvcid: 4420 00:13:24.986 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:24.986 traddr: 10.0.0.1 00:13:24.986 eflags: none 00:13:24.986 sectype: none 00:13:24.986 =====Discovery Log Entry 1====== 00:13:24.986 trtype: tcp 00:13:24.986 adrfam: ipv4 00:13:24.986 subtype: nvme subsystem 00:13:24.986 treq: not specified, sq flow control disable supported 00:13:24.986 portid: 1 00:13:24.986 trsvcid: 4420 00:13:24.986 subnqn: nqn.2016-06.io.spdk:testnqn 00:13:24.986 traddr: 10.0.0.1 00:13:24.986 eflags: none 00:13:24.986 sectype: none 00:13:24.986 15:36:26 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:13:24.986 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:13:25.245 ===================================================== 00:13:25.245 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:25.245 ===================================================== 00:13:25.245 Controller Capabilities/Features 00:13:25.245 ================================ 00:13:25.245 Vendor ID: 0000 00:13:25.245 Subsystem Vendor ID: 0000 00:13:25.245 Serial Number: 69fd8df66ec291f9ecca 00:13:25.245 Model Number: Linux 00:13:25.245 Firmware Version: 6.7.0-68 00:13:25.245 Recommended Arb Burst: 0 00:13:25.245 IEEE OUI Identifier: 00 00 00 00:13:25.245 Multi-path I/O 00:13:25.245 May have multiple subsystem ports: No 00:13:25.245 May have multiple controllers: No 00:13:25.245 Associated with SR-IOV VF: No 00:13:25.245 Max Data Transfer Size: Unlimited 00:13:25.245 Max Number of Namespaces: 0 00:13:25.245 Max Number of I/O Queues: 1024 00:13:25.245 NVMe Specification Version (VS): 1.3 00:13:25.245 NVMe Specification Version (Identify): 1.3 00:13:25.245 Maximum Queue Entries: 1024 00:13:25.245 Contiguous Queues Required: No 00:13:25.245 Arbitration Mechanisms Supported 00:13:25.245 Weighted Round Robin: Not Supported 00:13:25.245 Vendor Specific: Not Supported 00:13:25.245 Reset Timeout: 7500 ms 00:13:25.245 Doorbell Stride: 4 bytes 00:13:25.245 NVM Subsystem Reset: Not Supported 00:13:25.245 Command Sets Supported 00:13:25.245 NVM Command Set: Supported 00:13:25.245 Boot Partition: Not Supported 00:13:25.245 Memory Page Size Minimum: 4096 bytes 00:13:25.245 Memory Page Size Maximum: 4096 bytes 00:13:25.245 Persistent Memory Region: Not Supported 00:13:25.245 Optional Asynchronous Events Supported 00:13:25.245 Namespace Attribute Notices: Not Supported 00:13:25.245 Firmware Activation Notices: Not Supported 00:13:25.245 ANA Change Notices: Not Supported 00:13:25.245 PLE Aggregate Log Change Notices: Not Supported 00:13:25.245 LBA Status Info Alert Notices: Not Supported 00:13:25.245 EGE Aggregate Log Change Notices: Not Supported 00:13:25.245 Normal NVM Subsystem Shutdown event: Not Supported 00:13:25.245 Zone Descriptor Change Notices: Not Supported 00:13:25.245 Discovery Log Change Notices: Supported 00:13:25.245 Controller Attributes 00:13:25.245 128-bit Host Identifier: Not Supported 00:13:25.245 Non-Operational Permissive Mode: Not Supported 00:13:25.245 NVM Sets: Not Supported 00:13:25.246 Read Recovery Levels: Not Supported 00:13:25.246 Endurance Groups: Not Supported 00:13:25.246 Predictable Latency Mode: Not Supported 00:13:25.246 Traffic Based Keep ALive: Not Supported 00:13:25.246 Namespace Granularity: Not Supported 00:13:25.246 SQ Associations: Not Supported 00:13:25.246 UUID List: Not Supported 00:13:25.246 Multi-Domain Subsystem: Not Supported 00:13:25.246 Fixed Capacity Management: Not Supported 
00:13:25.246 Variable Capacity Management: Not Supported 00:13:25.246 Delete Endurance Group: Not Supported 00:13:25.246 Delete NVM Set: Not Supported 00:13:25.246 Extended LBA Formats Supported: Not Supported 00:13:25.246 Flexible Data Placement Supported: Not Supported 00:13:25.246 00:13:25.246 Controller Memory Buffer Support 00:13:25.246 ================================ 00:13:25.246 Supported: No 00:13:25.246 00:13:25.246 Persistent Memory Region Support 00:13:25.246 ================================ 00:13:25.246 Supported: No 00:13:25.246 00:13:25.246 Admin Command Set Attributes 00:13:25.246 ============================ 00:13:25.246 Security Send/Receive: Not Supported 00:13:25.246 Format NVM: Not Supported 00:13:25.246 Firmware Activate/Download: Not Supported 00:13:25.246 Namespace Management: Not Supported 00:13:25.246 Device Self-Test: Not Supported 00:13:25.246 Directives: Not Supported 00:13:25.246 NVMe-MI: Not Supported 00:13:25.246 Virtualization Management: Not Supported 00:13:25.246 Doorbell Buffer Config: Not Supported 00:13:25.246 Get LBA Status Capability: Not Supported 00:13:25.246 Command & Feature Lockdown Capability: Not Supported 00:13:25.246 Abort Command Limit: 1 00:13:25.246 Async Event Request Limit: 1 00:13:25.246 Number of Firmware Slots: N/A 00:13:25.246 Firmware Slot 1 Read-Only: N/A 00:13:25.246 Firmware Activation Without Reset: N/A 00:13:25.246 Multiple Update Detection Support: N/A 00:13:25.246 Firmware Update Granularity: No Information Provided 00:13:25.246 Per-Namespace SMART Log: No 00:13:25.246 Asymmetric Namespace Access Log Page: Not Supported 00:13:25.246 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:25.246 Command Effects Log Page: Not Supported 00:13:25.246 Get Log Page Extended Data: Supported 00:13:25.246 Telemetry Log Pages: Not Supported 00:13:25.246 Persistent Event Log Pages: Not Supported 00:13:25.246 Supported Log Pages Log Page: May Support 00:13:25.246 Commands Supported & Effects Log Page: Not Supported 00:13:25.246 Feature Identifiers & Effects Log Page:May Support 00:13:25.246 NVMe-MI Commands & Effects Log Page: May Support 00:13:25.246 Data Area 4 for Telemetry Log: Not Supported 00:13:25.246 Error Log Page Entries Supported: 1 00:13:25.246 Keep Alive: Not Supported 00:13:25.246 00:13:25.246 NVM Command Set Attributes 00:13:25.246 ========================== 00:13:25.246 Submission Queue Entry Size 00:13:25.246 Max: 1 00:13:25.246 Min: 1 00:13:25.246 Completion Queue Entry Size 00:13:25.246 Max: 1 00:13:25.246 Min: 1 00:13:25.246 Number of Namespaces: 0 00:13:25.246 Compare Command: Not Supported 00:13:25.246 Write Uncorrectable Command: Not Supported 00:13:25.246 Dataset Management Command: Not Supported 00:13:25.246 Write Zeroes Command: Not Supported 00:13:25.246 Set Features Save Field: Not Supported 00:13:25.246 Reservations: Not Supported 00:13:25.246 Timestamp: Not Supported 00:13:25.246 Copy: Not Supported 00:13:25.246 Volatile Write Cache: Not Present 00:13:25.246 Atomic Write Unit (Normal): 1 00:13:25.246 Atomic Write Unit (PFail): 1 00:13:25.246 Atomic Compare & Write Unit: 1 00:13:25.246 Fused Compare & Write: Not Supported 00:13:25.246 Scatter-Gather List 00:13:25.246 SGL Command Set: Supported 00:13:25.246 SGL Keyed: Not Supported 00:13:25.246 SGL Bit Bucket Descriptor: Not Supported 00:13:25.246 SGL Metadata Pointer: Not Supported 00:13:25.246 Oversized SGL: Not Supported 00:13:25.246 SGL Metadata Address: Not Supported 00:13:25.246 SGL Offset: Supported 00:13:25.246 Transport SGL Data Block: Not 
Supported 00:13:25.246 Replay Protected Memory Block: Not Supported 00:13:25.246 00:13:25.246 Firmware Slot Information 00:13:25.246 ========================= 00:13:25.246 Active slot: 0 00:13:25.246 00:13:25.246 00:13:25.246 Error Log 00:13:25.246 ========= 00:13:25.246 00:13:25.246 Active Namespaces 00:13:25.246 ================= 00:13:25.246 Discovery Log Page 00:13:25.246 ================== 00:13:25.246 Generation Counter: 2 00:13:25.246 Number of Records: 2 00:13:25.246 Record Format: 0 00:13:25.246 00:13:25.246 Discovery Log Entry 0 00:13:25.246 ---------------------- 00:13:25.246 Transport Type: 3 (TCP) 00:13:25.246 Address Family: 1 (IPv4) 00:13:25.246 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:25.246 Entry Flags: 00:13:25.246 Duplicate Returned Information: 0 00:13:25.246 Explicit Persistent Connection Support for Discovery: 0 00:13:25.246 Transport Requirements: 00:13:25.246 Secure Channel: Not Specified 00:13:25.246 Port ID: 1 (0x0001) 00:13:25.246 Controller ID: 65535 (0xffff) 00:13:25.246 Admin Max SQ Size: 32 00:13:25.246 Transport Service Identifier: 4420 00:13:25.246 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:25.246 Transport Address: 10.0.0.1 00:13:25.246 Discovery Log Entry 1 00:13:25.246 ---------------------- 00:13:25.246 Transport Type: 3 (TCP) 00:13:25.246 Address Family: 1 (IPv4) 00:13:25.246 Subsystem Type: 2 (NVM Subsystem) 00:13:25.246 Entry Flags: 00:13:25.246 Duplicate Returned Information: 0 00:13:25.246 Explicit Persistent Connection Support for Discovery: 0 00:13:25.246 Transport Requirements: 00:13:25.246 Secure Channel: Not Specified 00:13:25.246 Port ID: 1 (0x0001) 00:13:25.246 Controller ID: 65535 (0xffff) 00:13:25.246 Admin Max SQ Size: 32 00:13:25.246 Transport Service Identifier: 4420 00:13:25.246 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:13:25.246 Transport Address: 10.0.0.1 00:13:25.246 15:36:26 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:13:25.505 get_feature(0x01) failed 00:13:25.505 get_feature(0x02) failed 00:13:25.505 get_feature(0x04) failed 00:13:25.505 ===================================================== 00:13:25.505 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:13:25.505 ===================================================== 00:13:25.505 Controller Capabilities/Features 00:13:25.505 ================================ 00:13:25.505 Vendor ID: 0000 00:13:25.505 Subsystem Vendor ID: 0000 00:13:25.505 Serial Number: aaae55ebee7f9cc8fa0f 00:13:25.505 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:13:25.505 Firmware Version: 6.7.0-68 00:13:25.505 Recommended Arb Burst: 6 00:13:25.505 IEEE OUI Identifier: 00 00 00 00:13:25.505 Multi-path I/O 00:13:25.505 May have multiple subsystem ports: Yes 00:13:25.505 May have multiple controllers: Yes 00:13:25.505 Associated with SR-IOV VF: No 00:13:25.505 Max Data Transfer Size: Unlimited 00:13:25.505 Max Number of Namespaces: 1024 00:13:25.505 Max Number of I/O Queues: 128 00:13:25.505 NVMe Specification Version (VS): 1.3 00:13:25.505 NVMe Specification Version (Identify): 1.3 00:13:25.505 Maximum Queue Entries: 1024 00:13:25.505 Contiguous Queues Required: No 00:13:25.505 Arbitration Mechanisms Supported 00:13:25.505 Weighted Round Robin: Not Supported 00:13:25.505 Vendor Specific: Not Supported 00:13:25.505 Reset Timeout: 7500 ms 00:13:25.505 Doorbell Stride: 4 bytes 
00:13:25.505 NVM Subsystem Reset: Not Supported 00:13:25.505 Command Sets Supported 00:13:25.505 NVM Command Set: Supported 00:13:25.505 Boot Partition: Not Supported 00:13:25.505 Memory Page Size Minimum: 4096 bytes 00:13:25.505 Memory Page Size Maximum: 4096 bytes 00:13:25.505 Persistent Memory Region: Not Supported 00:13:25.505 Optional Asynchronous Events Supported 00:13:25.505 Namespace Attribute Notices: Supported 00:13:25.505 Firmware Activation Notices: Not Supported 00:13:25.505 ANA Change Notices: Supported 00:13:25.505 PLE Aggregate Log Change Notices: Not Supported 00:13:25.505 LBA Status Info Alert Notices: Not Supported 00:13:25.505 EGE Aggregate Log Change Notices: Not Supported 00:13:25.505 Normal NVM Subsystem Shutdown event: Not Supported 00:13:25.505 Zone Descriptor Change Notices: Not Supported 00:13:25.505 Discovery Log Change Notices: Not Supported 00:13:25.505 Controller Attributes 00:13:25.505 128-bit Host Identifier: Supported 00:13:25.505 Non-Operational Permissive Mode: Not Supported 00:13:25.505 NVM Sets: Not Supported 00:13:25.505 Read Recovery Levels: Not Supported 00:13:25.505 Endurance Groups: Not Supported 00:13:25.505 Predictable Latency Mode: Not Supported 00:13:25.505 Traffic Based Keep ALive: Supported 00:13:25.505 Namespace Granularity: Not Supported 00:13:25.505 SQ Associations: Not Supported 00:13:25.505 UUID List: Not Supported 00:13:25.505 Multi-Domain Subsystem: Not Supported 00:13:25.505 Fixed Capacity Management: Not Supported 00:13:25.505 Variable Capacity Management: Not Supported 00:13:25.505 Delete Endurance Group: Not Supported 00:13:25.505 Delete NVM Set: Not Supported 00:13:25.505 Extended LBA Formats Supported: Not Supported 00:13:25.505 Flexible Data Placement Supported: Not Supported 00:13:25.505 00:13:25.505 Controller Memory Buffer Support 00:13:25.505 ================================ 00:13:25.505 Supported: No 00:13:25.505 00:13:25.505 Persistent Memory Region Support 00:13:25.505 ================================ 00:13:25.505 Supported: No 00:13:25.505 00:13:25.505 Admin Command Set Attributes 00:13:25.505 ============================ 00:13:25.505 Security Send/Receive: Not Supported 00:13:25.505 Format NVM: Not Supported 00:13:25.505 Firmware Activate/Download: Not Supported 00:13:25.505 Namespace Management: Not Supported 00:13:25.505 Device Self-Test: Not Supported 00:13:25.505 Directives: Not Supported 00:13:25.505 NVMe-MI: Not Supported 00:13:25.505 Virtualization Management: Not Supported 00:13:25.505 Doorbell Buffer Config: Not Supported 00:13:25.505 Get LBA Status Capability: Not Supported 00:13:25.505 Command & Feature Lockdown Capability: Not Supported 00:13:25.505 Abort Command Limit: 4 00:13:25.505 Async Event Request Limit: 4 00:13:25.505 Number of Firmware Slots: N/A 00:13:25.505 Firmware Slot 1 Read-Only: N/A 00:13:25.505 Firmware Activation Without Reset: N/A 00:13:25.505 Multiple Update Detection Support: N/A 00:13:25.505 Firmware Update Granularity: No Information Provided 00:13:25.505 Per-Namespace SMART Log: Yes 00:13:25.505 Asymmetric Namespace Access Log Page: Supported 00:13:25.505 ANA Transition Time : 10 sec 00:13:25.505 00:13:25.505 Asymmetric Namespace Access Capabilities 00:13:25.505 ANA Optimized State : Supported 00:13:25.505 ANA Non-Optimized State : Supported 00:13:25.505 ANA Inaccessible State : Supported 00:13:25.505 ANA Persistent Loss State : Supported 00:13:25.505 ANA Change State : Supported 00:13:25.505 ANAGRPID is not changed : No 00:13:25.505 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:13:25.505 00:13:25.505 ANA Group Identifier Maximum : 128 00:13:25.505 Number of ANA Group Identifiers : 128 00:13:25.505 Max Number of Allowed Namespaces : 1024 00:13:25.505 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:13:25.505 Command Effects Log Page: Supported 00:13:25.505 Get Log Page Extended Data: Supported 00:13:25.505 Telemetry Log Pages: Not Supported 00:13:25.505 Persistent Event Log Pages: Not Supported 00:13:25.505 Supported Log Pages Log Page: May Support 00:13:25.505 Commands Supported & Effects Log Page: Not Supported 00:13:25.505 Feature Identifiers & Effects Log Page:May Support 00:13:25.505 NVMe-MI Commands & Effects Log Page: May Support 00:13:25.505 Data Area 4 for Telemetry Log: Not Supported 00:13:25.505 Error Log Page Entries Supported: 128 00:13:25.505 Keep Alive: Supported 00:13:25.505 Keep Alive Granularity: 1000 ms 00:13:25.505 00:13:25.505 NVM Command Set Attributes 00:13:25.505 ========================== 00:13:25.505 Submission Queue Entry Size 00:13:25.505 Max: 64 00:13:25.505 Min: 64 00:13:25.505 Completion Queue Entry Size 00:13:25.505 Max: 16 00:13:25.505 Min: 16 00:13:25.505 Number of Namespaces: 1024 00:13:25.505 Compare Command: Not Supported 00:13:25.506 Write Uncorrectable Command: Not Supported 00:13:25.506 Dataset Management Command: Supported 00:13:25.506 Write Zeroes Command: Supported 00:13:25.506 Set Features Save Field: Not Supported 00:13:25.506 Reservations: Not Supported 00:13:25.506 Timestamp: Not Supported 00:13:25.506 Copy: Not Supported 00:13:25.506 Volatile Write Cache: Present 00:13:25.506 Atomic Write Unit (Normal): 1 00:13:25.506 Atomic Write Unit (PFail): 1 00:13:25.506 Atomic Compare & Write Unit: 1 00:13:25.506 Fused Compare & Write: Not Supported 00:13:25.506 Scatter-Gather List 00:13:25.506 SGL Command Set: Supported 00:13:25.506 SGL Keyed: Not Supported 00:13:25.506 SGL Bit Bucket Descriptor: Not Supported 00:13:25.506 SGL Metadata Pointer: Not Supported 00:13:25.506 Oversized SGL: Not Supported 00:13:25.506 SGL Metadata Address: Not Supported 00:13:25.506 SGL Offset: Supported 00:13:25.506 Transport SGL Data Block: Not Supported 00:13:25.506 Replay Protected Memory Block: Not Supported 00:13:25.506 00:13:25.506 Firmware Slot Information 00:13:25.506 ========================= 00:13:25.506 Active slot: 0 00:13:25.506 00:13:25.506 Asymmetric Namespace Access 00:13:25.506 =========================== 00:13:25.506 Change Count : 0 00:13:25.506 Number of ANA Group Descriptors : 1 00:13:25.506 ANA Group Descriptor : 0 00:13:25.506 ANA Group ID : 1 00:13:25.506 Number of NSID Values : 1 00:13:25.506 Change Count : 0 00:13:25.506 ANA State : 1 00:13:25.506 Namespace Identifier : 1 00:13:25.506 00:13:25.506 Commands Supported and Effects 00:13:25.506 ============================== 00:13:25.506 Admin Commands 00:13:25.506 -------------- 00:13:25.506 Get Log Page (02h): Supported 00:13:25.506 Identify (06h): Supported 00:13:25.506 Abort (08h): Supported 00:13:25.506 Set Features (09h): Supported 00:13:25.506 Get Features (0Ah): Supported 00:13:25.506 Asynchronous Event Request (0Ch): Supported 00:13:25.506 Keep Alive (18h): Supported 00:13:25.506 I/O Commands 00:13:25.506 ------------ 00:13:25.506 Flush (00h): Supported 00:13:25.506 Write (01h): Supported LBA-Change 00:13:25.506 Read (02h): Supported 00:13:25.506 Write Zeroes (08h): Supported LBA-Change 00:13:25.506 Dataset Management (09h): Supported 00:13:25.506 00:13:25.506 Error Log 00:13:25.506 ========= 00:13:25.506 Entry: 0 00:13:25.506 Error Count: 0x3 00:13:25.506 Submission 
Queue Id: 0x0 00:13:25.506 Command Id: 0x5 00:13:25.506 Phase Bit: 0 00:13:25.506 Status Code: 0x2 00:13:25.506 Status Code Type: 0x0 00:13:25.506 Do Not Retry: 1 00:13:25.506 Error Location: 0x28 00:13:25.506 LBA: 0x0 00:13:25.506 Namespace: 0x0 00:13:25.506 Vendor Log Page: 0x0 00:13:25.506 ----------- 00:13:25.506 Entry: 1 00:13:25.506 Error Count: 0x2 00:13:25.506 Submission Queue Id: 0x0 00:13:25.506 Command Id: 0x5 00:13:25.506 Phase Bit: 0 00:13:25.506 Status Code: 0x2 00:13:25.506 Status Code Type: 0x0 00:13:25.506 Do Not Retry: 1 00:13:25.506 Error Location: 0x28 00:13:25.506 LBA: 0x0 00:13:25.506 Namespace: 0x0 00:13:25.506 Vendor Log Page: 0x0 00:13:25.506 ----------- 00:13:25.506 Entry: 2 00:13:25.506 Error Count: 0x1 00:13:25.506 Submission Queue Id: 0x0 00:13:25.506 Command Id: 0x4 00:13:25.506 Phase Bit: 0 00:13:25.506 Status Code: 0x2 00:13:25.506 Status Code Type: 0x0 00:13:25.506 Do Not Retry: 1 00:13:25.506 Error Location: 0x28 00:13:25.506 LBA: 0x0 00:13:25.506 Namespace: 0x0 00:13:25.506 Vendor Log Page: 0x0 00:13:25.506 00:13:25.506 Number of Queues 00:13:25.506 ================ 00:13:25.506 Number of I/O Submission Queues: 128 00:13:25.506 Number of I/O Completion Queues: 128 00:13:25.506 00:13:25.506 ZNS Specific Controller Data 00:13:25.506 ============================ 00:13:25.506 Zone Append Size Limit: 0 00:13:25.506 00:13:25.506 00:13:25.506 Active Namespaces 00:13:25.506 ================= 00:13:25.506 get_feature(0x05) failed 00:13:25.506 Namespace ID:1 00:13:25.506 Command Set Identifier: NVM (00h) 00:13:25.506 Deallocate: Supported 00:13:25.506 Deallocated/Unwritten Error: Not Supported 00:13:25.506 Deallocated Read Value: Unknown 00:13:25.506 Deallocate in Write Zeroes: Not Supported 00:13:25.506 Deallocated Guard Field: 0xFFFF 00:13:25.506 Flush: Supported 00:13:25.506 Reservation: Not Supported 00:13:25.506 Namespace Sharing Capabilities: Multiple Controllers 00:13:25.506 Size (in LBAs): 1310720 (5GiB) 00:13:25.506 Capacity (in LBAs): 1310720 (5GiB) 00:13:25.506 Utilization (in LBAs): 1310720 (5GiB) 00:13:25.506 UUID: d1c9a495-5409-460d-a426-498ba28e96e0 00:13:25.506 Thin Provisioning: Not Supported 00:13:25.506 Per-NS Atomic Units: Yes 00:13:25.506 Atomic Boundary Size (Normal): 0 00:13:25.506 Atomic Boundary Size (PFail): 0 00:13:25.506 Atomic Boundary Offset: 0 00:13:25.506 NGUID/EUI64 Never Reused: No 00:13:25.506 ANA group ID: 1 00:13:25.506 Namespace Write Protected: No 00:13:25.506 Number of LBA Formats: 1 00:13:25.506 Current LBA Format: LBA Format #00 00:13:25.506 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:13:25.506 00:13:25.506 15:36:26 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:13:25.506 15:36:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:25.506 15:36:26 -- nvmf/common.sh@117 -- # sync 00:13:25.506 15:36:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.506 15:36:26 -- nvmf/common.sh@120 -- # set +e 00:13:25.506 15:36:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.506 15:36:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.506 rmmod nvme_tcp 00:13:25.506 rmmod nvme_fabrics 00:13:25.506 15:36:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.506 15:36:26 -- nvmf/common.sh@124 -- # set -e 00:13:25.506 15:36:26 -- nvmf/common.sh@125 -- # return 0 00:13:25.506 15:36:26 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:25.506 15:36:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:25.506 15:36:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:25.506 15:36:26 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:25.506 15:36:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.506 15:36:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.506 15:36:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.506 15:36:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.506 15:36:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.506 15:36:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:25.506 15:36:26 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:13:25.506 15:36:26 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:13:25.506 15:36:26 -- nvmf/common.sh@675 -- # echo 0 00:13:25.506 15:36:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:13:25.506 15:36:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:13:25.506 15:36:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:13:25.506 15:36:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:13:25.506 15:36:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:13:25.506 15:36:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:13:25.506 15:36:26 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:26.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:26.439 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:26.439 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:26.439 ************************************ 00:13:26.439 END TEST nvmf_identify_kernel_target 00:13:26.439 ************************************ 00:13:26.439 00:13:26.439 real 0m2.858s 00:13:26.439 user 0m0.975s 00:13:26.439 sys 0m1.401s 00:13:26.439 15:36:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:26.439 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:13:26.439 15:36:27 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:13:26.439 15:36:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:26.439 15:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.439 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:13:26.697 ************************************ 00:13:26.697 START TEST nvmf_auth 00:13:26.697 ************************************ 00:13:26.697 15:36:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:13:26.697 * Looking for test storage... 
00:13:26.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:26.697 15:36:27 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.697 15:36:27 -- nvmf/common.sh@7 -- # uname -s 00:13:26.697 15:36:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.697 15:36:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.697 15:36:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.697 15:36:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.697 15:36:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.697 15:36:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.697 15:36:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.697 15:36:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.697 15:36:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.697 15:36:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:26.697 15:36:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:13:26.697 15:36:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.697 15:36:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.697 15:36:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.697 15:36:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.697 15:36:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.697 15:36:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.697 15:36:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.697 15:36:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.697 15:36:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.697 15:36:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.697 15:36:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.697 15:36:28 -- paths/export.sh@5 -- # export PATH 00:13:26.697 15:36:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.697 15:36:28 -- nvmf/common.sh@47 -- # : 0 00:13:26.697 15:36:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.697 15:36:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.697 15:36:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.697 15:36:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.697 15:36:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.697 15:36:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.697 15:36:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.697 15:36:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.697 15:36:28 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:26.697 15:36:28 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:26.697 15:36:28 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:13:26.697 15:36:28 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:13:26.697 15:36:28 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:13:26.697 15:36:28 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:13:26.697 15:36:28 -- host/auth.sh@21 -- # keys=() 00:13:26.697 15:36:28 -- host/auth.sh@77 -- # nvmftestinit 00:13:26.697 15:36:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:26.697 15:36:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.697 15:36:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:26.697 15:36:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:26.697 15:36:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:26.697 15:36:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.697 15:36:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.697 15:36:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.697 15:36:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:26.697 15:36:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:26.697 15:36:28 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.697 15:36:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.697 15:36:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:26.697 15:36:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:26.697 15:36:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.697 15:36:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.697 15:36:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.697 15:36:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.697 15:36:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.697 15:36:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.697 15:36:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.697 15:36:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.697 15:36:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:26.697 15:36:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:26.697 Cannot find device "nvmf_tgt_br" 00:13:26.697 15:36:28 -- nvmf/common.sh@155 -- # true 00:13:26.697 15:36:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.697 Cannot find device "nvmf_tgt_br2" 00:13:26.697 15:36:28 -- nvmf/common.sh@156 -- # true 00:13:26.697 15:36:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:26.697 15:36:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:26.697 Cannot find device "nvmf_tgt_br" 00:13:26.697 15:36:28 -- nvmf/common.sh@158 -- # true 00:13:26.697 15:36:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:26.697 Cannot find device "nvmf_tgt_br2" 00:13:26.697 15:36:28 -- nvmf/common.sh@159 -- # true 00:13:26.697 15:36:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:27.012 15:36:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:27.012 15:36:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.012 15:36:28 -- nvmf/common.sh@162 -- # true 00:13:27.012 15:36:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.012 15:36:28 -- nvmf/common.sh@163 -- # true 00:13:27.012 15:36:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.012 15:36:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.012 15:36:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.012 15:36:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.012 15:36:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.012 15:36:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.012 15:36:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.012 15:36:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.012 15:36:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.012 15:36:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:27.012 15:36:28 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:27.012 15:36:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:27.012 15:36:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:27.012 15:36:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.012 15:36:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.012 15:36:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.012 15:36:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:27.012 15:36:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:27.012 15:36:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.012 15:36:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.012 15:36:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.012 15:36:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.013 15:36:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.013 15:36:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:27.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:13:27.013 00:13:27.013 --- 10.0.0.2 ping statistics --- 00:13:27.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.013 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:27.013 15:36:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:27.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:27.013 00:13:27.013 --- 10.0.0.3 ping statistics --- 00:13:27.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.013 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:27.013 15:36:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:13:27.013 00:13:27.013 --- 10.0.0.1 ping statistics --- 00:13:27.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.013 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:27.013 15:36:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.013 15:36:28 -- nvmf/common.sh@422 -- # return 0 00:13:27.013 15:36:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:27.013 15:36:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.013 15:36:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:27.013 15:36:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:27.013 15:36:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.013 15:36:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:27.013 15:36:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:27.013 15:36:28 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:13:27.013 15:36:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:27.013 15:36:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:27.013 15:36:28 -- common/autotest_common.sh@10 -- # set +x 00:13:27.013 15:36:28 -- nvmf/common.sh@470 -- # nvmfpid=74644 00:13:27.013 15:36:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:13:27.013 15:36:28 -- nvmf/common.sh@471 -- # waitforlisten 74644 00:13:27.013 15:36:28 -- common/autotest_common.sh@817 -- # '[' -z 74644 ']' 00:13:27.013 15:36:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.013 15:36:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.013 15:36:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
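(Editor's note: the nvmf_veth_init sequence above builds the virtual test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the bridge ends enslaved to nvmf_br, 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace, verified by the three pings. nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the same topology, reduced to a single target interface; SPDK_DIR is an assumed checkout path.)

    #!/usr/bin/env bash
    # Condensed sketch of nvmf_veth_init plus the namespaced nvmf_tgt launch (requires root).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: vagrant checkout path used in this run

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # The target runs inside the namespace; its RPC Unix socket stays visible to the host.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &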
00:13:27.013 15:36:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.013 15:36:28 -- common/autotest_common.sh@10 -- # set +x 00:13:27.947 15:36:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.947 15:36:29 -- common/autotest_common.sh@850 -- # return 0 00:13:28.206 15:36:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:28.206 15:36:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:28.206 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:13:28.206 15:36:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.206 15:36:29 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:13:28.206 15:36:29 -- host/auth.sh@81 -- # gen_key null 32 00:13:28.206 15:36:29 -- host/auth.sh@53 -- # local digest len file key 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # local -A digests 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # digest=null 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # len=32 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # key=74bdf8758cc7992aa1e7845e36e00b16 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.TaT 00:13:28.206 15:36:29 -- host/auth.sh@59 -- # format_dhchap_key 74bdf8758cc7992aa1e7845e36e00b16 0 00:13:28.206 15:36:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 74bdf8758cc7992aa1e7845e36e00b16 0 00:13:28.206 15:36:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # key=74bdf8758cc7992aa1e7845e36e00b16 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # digest=0 00:13:28.206 15:36:29 -- nvmf/common.sh@694 -- # python - 00:13:28.206 15:36:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.TaT 00:13:28.206 15:36:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.TaT 00:13:28.206 15:36:29 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.TaT 00:13:28.206 15:36:29 -- host/auth.sh@82 -- # gen_key null 48 00:13:28.206 15:36:29 -- host/auth.sh@53 -- # local digest len file key 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # local -A digests 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # digest=null 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # len=48 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # key=29bf44e0f7d1a877ed370c9452bf7078d363711a918b9d75 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.wlJ 00:13:28.206 15:36:29 -- host/auth.sh@59 -- # format_dhchap_key 29bf44e0f7d1a877ed370c9452bf7078d363711a918b9d75 0 00:13:28.206 15:36:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 29bf44e0f7d1a877ed370c9452bf7078d363711a918b9d75 0 00:13:28.206 15:36:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # key=29bf44e0f7d1a877ed370c9452bf7078d363711a918b9d75 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # digest=0 00:13:28.206 
15:36:29 -- nvmf/common.sh@694 -- # python - 00:13:28.206 15:36:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.wlJ 00:13:28.206 15:36:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.wlJ 00:13:28.206 15:36:29 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.wlJ 00:13:28.206 15:36:29 -- host/auth.sh@83 -- # gen_key sha256 32 00:13:28.206 15:36:29 -- host/auth.sh@53 -- # local digest len file key 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # local -A digests 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # digest=sha256 00:13:28.206 15:36:29 -- host/auth.sh@56 -- # len=32 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:28.206 15:36:29 -- host/auth.sh@57 -- # key=a58520f01ffbaef5cd351b39af2d9cef 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:13:28.206 15:36:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Wi4 00:13:28.206 15:36:29 -- host/auth.sh@59 -- # format_dhchap_key a58520f01ffbaef5cd351b39af2d9cef 1 00:13:28.206 15:36:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 a58520f01ffbaef5cd351b39af2d9cef 1 00:13:28.206 15:36:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # key=a58520f01ffbaef5cd351b39af2d9cef 00:13:28.206 15:36:29 -- nvmf/common.sh@693 -- # digest=1 00:13:28.206 15:36:29 -- nvmf/common.sh@694 -- # python - 00:13:28.206 15:36:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Wi4 00:13:28.206 15:36:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Wi4 00:13:28.206 15:36:29 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Wi4 00:13:28.206 15:36:29 -- host/auth.sh@84 -- # gen_key sha384 48 00:13:28.206 15:36:29 -- host/auth.sh@53 -- # local digest len file key 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.206 15:36:29 -- host/auth.sh@54 -- # local -A digests 00:13:28.207 15:36:29 -- host/auth.sh@56 -- # digest=sha384 00:13:28.207 15:36:29 -- host/auth.sh@56 -- # len=48 00:13:28.207 15:36:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:28.207 15:36:29 -- host/auth.sh@57 -- # key=48ae04badcde149647fd2dcc45061a011fcc49a0c708d40c 00:13:28.207 15:36:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:13:28.207 15:36:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.suH 00:13:28.207 15:36:29 -- host/auth.sh@59 -- # format_dhchap_key 48ae04badcde149647fd2dcc45061a011fcc49a0c708d40c 2 00:13:28.207 15:36:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 48ae04badcde149647fd2dcc45061a011fcc49a0c708d40c 2 00:13:28.207 15:36:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:28.207 15:36:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:13:28.207 15:36:29 -- nvmf/common.sh@693 -- # key=48ae04badcde149647fd2dcc45061a011fcc49a0c708d40c 00:13:28.207 15:36:29 -- nvmf/common.sh@693 -- # digest=2 00:13:28.207 15:36:29 -- nvmf/common.sh@694 -- # python - 00:13:28.465 15:36:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.suH 00:13:28.465 15:36:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.suH 00:13:28.465 15:36:29 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.suH 00:13:28.465 15:36:29 -- host/auth.sh@85 -- # gen_key sha512 64 00:13:28.465 15:36:29 -- host/auth.sh@53 -- # local digest len file key 00:13:28.465 15:36:29 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.465 15:36:29 -- host/auth.sh@54 -- # local -A digests 00:13:28.465 15:36:29 -- host/auth.sh@56 -- # digest=sha512 00:13:28.465 15:36:29 -- host/auth.sh@56 -- # len=64 00:13:28.465 15:36:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:28.465 15:36:29 -- host/auth.sh@57 -- # key=3eb634c6fc76f7c08af894057cd9f3eae08b7325069ecd9ee960ae62d9922491 00:13:28.465 15:36:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:13:28.465 15:36:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.CVX 00:13:28.465 15:36:29 -- host/auth.sh@59 -- # format_dhchap_key 3eb634c6fc76f7c08af894057cd9f3eae08b7325069ecd9ee960ae62d9922491 3 00:13:28.465 15:36:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 3eb634c6fc76f7c08af894057cd9f3eae08b7325069ecd9ee960ae62d9922491 3 00:13:28.465 15:36:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:28.465 15:36:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:13:28.465 15:36:29 -- nvmf/common.sh@693 -- # key=3eb634c6fc76f7c08af894057cd9f3eae08b7325069ecd9ee960ae62d9922491 00:13:28.465 15:36:29 -- nvmf/common.sh@693 -- # digest=3 00:13:28.465 15:36:29 -- nvmf/common.sh@694 -- # python - 00:13:28.465 15:36:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.CVX 00:13:28.465 15:36:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.CVX 00:13:28.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.465 15:36:29 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.CVX 00:13:28.465 15:36:29 -- host/auth.sh@87 -- # waitforlisten 74644 00:13:28.465 15:36:29 -- common/autotest_common.sh@817 -- # '[' -z 74644 ']' 00:13:28.465 15:36:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.465 15:36:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:28.465 15:36:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
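(Editor's note: gen_key above draws len/2 random bytes with xxd and wraps them into an SPDK DH-HMAC-CHAP secret string via format_dhchap_key. A hedged sketch of the same idea follows; the trailing 4-byte CRC-32 inside the base64 payload is an assumption based on the standard DHHC-1 secret representation, so prefer the in-tree helper for real runs.)

    #!/usr/bin/env bash
    # Sketch of gen_key: random hex key wrapped as a DHHC-1 secret string.
    # The CRC-32 suffix is an assumption, not taken verbatim from the trace.
    len=32                                           # hex characters, as in "gen_key null 32"
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    secret=$(python3 - "$key" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")      # assumed 4-byte little-endian CRC suffix
    print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
    EOF
    )
    echo "$secret"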
00:13:28.465 15:36:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:28.465 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:13:28.723 15:36:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.723 15:36:30 -- common/autotest_common.sh@850 -- # return 0 00:13:28.723 15:36:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:13:28.723 15:36:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TaT 00:13:28.723 15:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.723 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.723 15:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.723 15:36:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:13:28.724 15:36:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wlJ 00:13:28.724 15:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.724 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.724 15:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.724 15:36:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:13:28.724 15:36:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wi4 00:13:28.724 15:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.724 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.724 15:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.724 15:36:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:13:28.724 15:36:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.suH 00:13:28.724 15:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.724 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.724 15:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.724 15:36:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:13:28.724 15:36:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CVX 00:13:28.724 15:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.724 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.724 15:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.724 15:36:30 -- host/auth.sh@92 -- # nvmet_auth_init 00:13:28.724 15:36:30 -- host/auth.sh@35 -- # get_main_ns_ip 00:13:28.724 15:36:30 -- nvmf/common.sh@717 -- # local ip 00:13:28.724 15:36:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:28.724 15:36:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:28.724 15:36:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:28.724 15:36:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:28.724 15:36:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:28.724 15:36:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:28.724 15:36:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:28.724 15:36:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:28.724 15:36:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:28.724 15:36:30 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:13:28.724 15:36:30 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:13:28.724 15:36:30 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:13:28.724 15:36:30 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:13:28.724 15:36:30 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:13:28.724 15:36:30 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:13:28.724 15:36:30 -- nvmf/common.sh@628 -- # local block nvme 00:13:28.724 15:36:30 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:13:28.724 15:36:30 -- nvmf/common.sh@631 -- # modprobe nvmet 00:13:28.724 15:36:30 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:13:28.724 15:36:30 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:29.239 Waiting for block devices as requested 00:13:29.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.239 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.806 15:36:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:29.806 15:36:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:13:29.806 15:36:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:29.806 15:36:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:29.806 15:36:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:13:29.806 15:36:31 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:29.806 15:36:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:13:29.806 No valid GPT data, bailing 00:13:29.806 15:36:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:29.806 15:36:31 -- scripts/common.sh@391 -- # pt= 00:13:29.806 15:36:31 -- scripts/common.sh@392 -- # return 1 00:13:29.806 15:36:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:13:29.806 15:36:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:29.806 15:36:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:13:29.806 15:36:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:13:29.806 15:36:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:29.806 15:36:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:13:29.806 15:36:31 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:13:29.806 15:36:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:13:29.806 No valid GPT data, bailing 00:13:29.806 15:36:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:13:29.806 15:36:31 -- scripts/common.sh@391 -- # pt= 00:13:29.806 15:36:31 -- scripts/common.sh@392 -- # return 1 00:13:29.806 15:36:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:13:29.806 15:36:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:29.806 15:36:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:13:29.806 15:36:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:13:29.806 15:36:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:29.806 15:36:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.806 15:36:31 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:13:29.806 15:36:31 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:13:29.806 15:36:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:13:30.065 No valid GPT data, bailing 00:13:30.065 15:36:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:13:30.065 15:36:31 -- scripts/common.sh@391 -- # pt= 00:13:30.065 15:36:31 -- scripts/common.sh@392 -- # return 1 00:13:30.065 15:36:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:13:30.065 15:36:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:13:30.065 15:36:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:13:30.065 15:36:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:13:30.065 15:36:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:30.065 15:36:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:30.065 15:36:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.065 15:36:31 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:13:30.065 15:36:31 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:13:30.065 15:36:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:13:30.065 No valid GPT data, bailing 00:13:30.065 15:36:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:30.065 15:36:31 -- scripts/common.sh@391 -- # pt= 00:13:30.065 15:36:31 -- scripts/common.sh@392 -- # return 1 00:13:30.065 15:36:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:13:30.065 15:36:31 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:13:30.065 15:36:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:13:30.065 15:36:31 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:13:30.065 15:36:31 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:13:30.065 15:36:31 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:13:30.065 15:36:31 -- nvmf/common.sh@656 -- # echo 1 00:13:30.065 15:36:31 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:13:30.065 15:36:31 -- nvmf/common.sh@658 -- # echo 1 00:13:30.065 15:36:31 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:13:30.065 15:36:31 -- nvmf/common.sh@661 -- # echo tcp 00:13:30.065 15:36:31 -- nvmf/common.sh@662 -- # echo 4420 00:13:30.065 15:36:31 -- nvmf/common.sh@663 -- # echo ipv4 00:13:30.065 15:36:31 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:13:30.065 15:36:31 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db --hostid=02dfa913-00e4-4a25-ab2c-855f7283d4db -a 10.0.0.1 -t tcp -s 4420 00:13:30.065 00:13:30.065 Discovery Log Number of Records 2, Generation counter 2 00:13:30.065 =====Discovery Log Entry 0====== 00:13:30.065 trtype: tcp 00:13:30.065 adrfam: ipv4 00:13:30.065 subtype: current discovery subsystem 00:13:30.065 treq: not specified, sq flow control disable supported 00:13:30.065 portid: 1 00:13:30.065 trsvcid: 4420 00:13:30.065 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:30.065 traddr: 10.0.0.1 00:13:30.065 eflags: none 00:13:30.065 sectype: none 00:13:30.065 =====Discovery Log Entry 1====== 00:13:30.065 trtype: tcp 00:13:30.065 adrfam: ipv4 00:13:30.065 subtype: nvme subsystem 00:13:30.065 treq: not specified, sq flow control disable supported 
00:13:30.065 portid: 1 00:13:30.065 trsvcid: 4420 00:13:30.065 subnqn: nqn.2024-02.io.spdk:cnode0 00:13:30.065 traddr: 10.0.0.1 00:13:30.065 eflags: none 00:13:30.065 sectype: none 00:13:30.065 15:36:31 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:13:30.065 15:36:31 -- host/auth.sh@37 -- # echo 0 00:13:30.065 15:36:31 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:13:30.065 15:36:31 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:13:30.065 15:36:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:30.065 15:36:31 -- host/auth.sh@44 -- # digest=sha256 00:13:30.065 15:36:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:30.065 15:36:31 -- host/auth.sh@44 -- # keyid=1 00:13:30.065 15:36:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:30.065 15:36:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:30.065 15:36:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:30.323 15:36:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:30.323 15:36:31 -- host/auth.sh@100 -- # IFS=, 00:13:30.323 15:36:31 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:13:30.323 15:36:31 -- host/auth.sh@100 -- # IFS=, 00:13:30.323 15:36:31 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.323 15:36:31 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:13:30.323 15:36:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:30.323 15:36:31 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:13:30.323 15:36:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.323 15:36:31 -- host/auth.sh@68 -- # keyid=1 00:13:30.323 15:36:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.323 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.323 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.323 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.323 15:36:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:30.323 15:36:31 -- nvmf/common.sh@717 -- # local ip 00:13:30.323 15:36:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:30.323 15:36:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:30.323 15:36:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:30.323 15:36:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:30.323 15:36:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:30.323 15:36:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:30.323 15:36:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:30.323 15:36:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:30.324 15:36:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:30.324 15:36:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:30.324 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.324 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.324 
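(Editor's note: nvmet_auth_set_key, traced above as three echos after the host entry is created and linked into allowed_hosts, pushes the negotiated digest, DH group, and secret to the kernel target for nqn.2024-02.io.spdk:host0. A sketch of where those echos most plausibly land; the dhchap_* attribute names are an assumption based on the upstream nvmet auth configfs interface and are not shown verbatim in the trace.)

    # Sketch only: attribute names below are assumed, not taken from the trace.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==:' > "$host/dhchap_key"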
nvme0n1 00:13:30.324 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.324 15:36:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:30.324 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.324 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.324 15:36:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:30.324 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.324 15:36:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.324 15:36:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:30.324 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.324 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.324 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.324 15:36:31 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:13:30.324 15:36:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.324 15:36:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:30.324 15:36:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:13:30.324 15:36:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:30.324 15:36:31 -- host/auth.sh@44 -- # digest=sha256 00:13:30.324 15:36:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:30.324 15:36:31 -- host/auth.sh@44 -- # keyid=0 00:13:30.324 15:36:31 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:30.324 15:36:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:30.324 15:36:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:30.324 15:36:31 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:30.324 15:36:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:13:30.324 15:36:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:30.324 15:36:31 -- host/auth.sh@68 -- # digest=sha256 00:13:30.324 15:36:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:30.324 15:36:31 -- host/auth.sh@68 -- # keyid=0 00:13:30.324 15:36:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.324 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.324 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.324 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.324 15:36:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:30.324 15:36:31 -- nvmf/common.sh@717 -- # local ip 00:13:30.582 15:36:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:30.582 15:36:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:30.582 15:36:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:30.582 15:36:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:30.582 15:36:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:30.582 15:36:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:30.582 15:36:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:30.582 15:36:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:30.582 15:36:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:30.582 15:36:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:30.582 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.582 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.582 nvme0n1 
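(Editor's note: each connect_authenticate cycle above is two RPCs on the initiator side: bdev_nvme_set_options restricts the allowed DH-HMAC-CHAP digests and DH groups, then bdev_nvme_attach_controller connects with the key registered earlier through keyring_file_add_key; bdev_nvme_get_controllers confirms nvme0 exists and bdev_nvme_detach_controller tears it down before the next combination. The same flow driven through rpc.py, with the socket path assumed to be the default used by rpc_cmd:)

    #!/usr/bin/env bash
    # One connect/verify/detach cycle, mirroring the rpc_cmd calls in the trace.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout path
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc bdev_nvme_detach_controller nvme0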
00:13:30.582 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.582 15:36:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:30.582 15:36:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:30.582 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.582 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.582 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.582 15:36:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.582 15:36:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:30.582 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.582 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.582 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.582 15:36:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:30.582 15:36:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:13:30.582 15:36:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:30.582 15:36:31 -- host/auth.sh@44 -- # digest=sha256 00:13:30.582 15:36:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:30.582 15:36:31 -- host/auth.sh@44 -- # keyid=1 00:13:30.582 15:36:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:30.582 15:36:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:30.582 15:36:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:30.582 15:36:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:30.582 15:36:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:13:30.582 15:36:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:30.582 15:36:31 -- host/auth.sh@68 -- # digest=sha256 00:13:30.582 15:36:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:30.583 15:36:31 -- host/auth.sh@68 -- # keyid=1 00:13:30.583 15:36:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.583 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.583 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.583 15:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.583 15:36:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:30.583 15:36:31 -- nvmf/common.sh@717 -- # local ip 00:13:30.583 15:36:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:30.583 15:36:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:30.583 15:36:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:30.583 15:36:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:30.583 15:36:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:30.583 15:36:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:30.583 15:36:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:30.583 15:36:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:30.583 15:36:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:30.583 15:36:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:30.583 15:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.583 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 nvme0n1 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:30.841 15:36:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:13:30.841 15:36:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # digest=sha256 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # keyid=2 00:13:30.841 15:36:32 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:30.841 15:36:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:30.841 15:36:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:30.841 15:36:32 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:30.841 15:36:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:13:30.841 15:36:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:30.841 15:36:32 -- host/auth.sh@68 -- # digest=sha256 00:13:30.841 15:36:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:30.841 15:36:32 -- host/auth.sh@68 -- # keyid=2 00:13:30.841 15:36:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:30.841 15:36:32 -- nvmf/common.sh@717 -- # local ip 00:13:30.841 15:36:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:30.841 15:36:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:30.841 15:36:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:30.841 15:36:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:30.841 15:36:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:30.841 15:36:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:30.841 15:36:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:30.841 15:36:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:30.841 15:36:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:30.841 15:36:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 nvme0n1 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:30.841 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.841 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.841 15:36:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:30.841 15:36:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:13:30.841 15:36:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # digest=sha256 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:30.841 15:36:32 -- host/auth.sh@44 -- # keyid=3 00:13:30.841 15:36:32 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:30.841 15:36:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:30.841 15:36:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:30.842 15:36:32 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:30.842 15:36:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:13:30.842 15:36:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:30.842 15:36:32 -- host/auth.sh@68 -- # digest=sha256 00:13:30.842 15:36:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:30.842 15:36:32 -- host/auth.sh@68 -- # keyid=3 00:13:30.842 15:36:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.842 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.842 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.100 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.100 15:36:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:31.100 15:36:32 -- nvmf/common.sh@717 -- # local ip 00:13:31.100 15:36:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:31.100 15:36:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:31.100 15:36:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:31.100 15:36:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:31.100 15:36:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:31.100 15:36:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:31.100 15:36:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:31.100 15:36:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:31.100 15:36:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:31.100 15:36:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:31.100 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.100 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.100 nvme0n1 00:13:31.100 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.100 15:36:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:31.100 15:36:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:31.100 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.100 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.100 15:36:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.100 15:36:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.100 15:36:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:31.100 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.100 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.100 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.100 15:36:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:31.100 15:36:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:13:31.100 15:36:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:31.100 15:36:32 -- host/auth.sh@44 -- # digest=sha256 00:13:31.100 15:36:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:31.100 15:36:32 -- host/auth.sh@44 -- # keyid=4 00:13:31.100 15:36:32 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:31.100 15:36:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:31.100 15:36:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:31.100 15:36:32 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:31.100 15:36:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:13:31.100 15:36:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:31.100 15:36:32 -- host/auth.sh@68 -- # digest=sha256 00:13:31.100 15:36:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:31.100 15:36:32 -- host/auth.sh@68 -- # keyid=4 00:13:31.101 15:36:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:31.101 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.101 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.101 15:36:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:31.101 15:36:32 -- nvmf/common.sh@717 -- # local ip 00:13:31.101 15:36:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:31.101 15:36:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:31.101 15:36:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:31.101 15:36:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:31.101 15:36:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:31.101 15:36:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:31.101 15:36:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:31.101 15:36:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:31.101 15:36:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:31.101 15:36:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:31.101 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.101 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.359 nvme0n1 00:13:31.359 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.359 15:36:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:31.359 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.359 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.359 15:36:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:31.359 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.359 15:36:32 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.359 15:36:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:31.359 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.359 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.359 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.359 15:36:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.359 15:36:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:31.359 15:36:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:13:31.359 15:36:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:31.359 15:36:32 -- host/auth.sh@44 -- # digest=sha256 00:13:31.359 15:36:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:31.359 15:36:32 -- host/auth.sh@44 -- # keyid=0 00:13:31.359 15:36:32 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:31.359 15:36:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:31.359 15:36:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:31.617 15:36:32 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:31.617 15:36:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:13:31.617 15:36:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:31.617 15:36:32 -- host/auth.sh@68 -- # digest=sha256 00:13:31.617 15:36:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:31.617 15:36:32 -- host/auth.sh@68 -- # keyid=0 00:13:31.617 15:36:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.617 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.618 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.618 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.618 15:36:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:31.618 15:36:32 -- nvmf/common.sh@717 -- # local ip 00:13:31.618 15:36:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:31.618 15:36:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:31.618 15:36:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:31.618 15:36:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:31.618 15:36:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:31.618 15:36:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:31.618 15:36:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:31.618 15:36:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:31.618 15:36:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:31.618 15:36:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:31.618 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.618 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.618 nvme0n1 00:13:31.618 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.618 15:36:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:31.618 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.618 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.618 15:36:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:31.618 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:31.876 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.876 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.876 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:31.876 15:36:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:13:31.876 15:36:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:31.876 15:36:33 -- host/auth.sh@44 -- # digest=sha256 00:13:31.876 15:36:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:31.876 15:36:33 -- host/auth.sh@44 -- # keyid=1 00:13:31.876 15:36:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:31.876 15:36:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:31.876 15:36:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:31.876 15:36:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:31.876 15:36:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:13:31.876 15:36:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:31.876 15:36:33 -- host/auth.sh@68 -- # digest=sha256 00:13:31.876 15:36:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:31.876 15:36:33 -- host/auth.sh@68 -- # keyid=1 00:13:31.876 15:36:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.876 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.876 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.876 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:31.876 15:36:33 -- nvmf/common.sh@717 -- # local ip 00:13:31.876 15:36:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:31.876 15:36:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:31.876 15:36:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:31.876 15:36:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:31.876 15:36:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:31.876 15:36:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:31.876 15:36:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:31.876 15:36:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:31.876 15:36:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:31.876 15:36:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:31.876 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.876 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.876 nvme0n1 00:13:31.876 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:31.876 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.876 15:36:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:31.876 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.876 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.876 15:36:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:31.876 15:36:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:13:31.876 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.876 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.877 15:36:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:31.877 15:36:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:13:31.877 15:36:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:31.877 15:36:33 -- host/auth.sh@44 -- # digest=sha256 00:13:31.877 15:36:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:31.877 15:36:33 -- host/auth.sh@44 -- # keyid=2 00:13:31.877 15:36:33 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:31.877 15:36:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:31.877 15:36:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:31.877 15:36:33 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:31.877 15:36:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:13:31.877 15:36:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:31.877 15:36:33 -- host/auth.sh@68 -- # digest=sha256 00:13:31.877 15:36:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:31.877 15:36:33 -- host/auth.sh@68 -- # keyid=2 00:13:31.877 15:36:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.877 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.877 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.877 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.877 15:36:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:31.877 15:36:33 -- nvmf/common.sh@717 -- # local ip 00:13:31.877 15:36:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:31.877 15:36:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:31.877 15:36:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:31.877 15:36:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:31.877 15:36:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:31.877 15:36:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:31.877 15:36:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:31.877 15:36:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:31.877 15:36:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:31.877 15:36:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:31.877 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.877 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.135 nvme0n1 00:13:32.135 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.135 15:36:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:32.135 15:36:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:32.135 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.135 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.135 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.135 15:36:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.135 15:36:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:32.135 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.135 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.135 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.135 
15:36:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:32.135 15:36:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:13:32.135 15:36:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:32.135 15:36:33 -- host/auth.sh@44 -- # digest=sha256 00:13:32.135 15:36:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:32.135 15:36:33 -- host/auth.sh@44 -- # keyid=3 00:13:32.135 15:36:33 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:32.135 15:36:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:32.135 15:36:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:32.135 15:36:33 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:32.135 15:36:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:13:32.135 15:36:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:32.135 15:36:33 -- host/auth.sh@68 -- # digest=sha256 00:13:32.135 15:36:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:32.135 15:36:33 -- host/auth.sh@68 -- # keyid=3 00:13:32.135 15:36:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.135 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.135 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.135 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.135 15:36:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:32.135 15:36:33 -- nvmf/common.sh@717 -- # local ip 00:13:32.135 15:36:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:32.135 15:36:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:32.135 15:36:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:32.135 15:36:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:32.135 15:36:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:32.135 15:36:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:32.135 15:36:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:32.135 15:36:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:32.135 15:36:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:32.135 15:36:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:32.135 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.135 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.394 nvme0n1 00:13:32.394 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:32.394 15:36:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:32.394 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.394 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.394 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:32.394 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.394 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.394 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:32.394 15:36:33 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:13:32.394 15:36:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:32.394 15:36:33 -- host/auth.sh@44 -- # digest=sha256 00:13:32.394 15:36:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:32.394 15:36:33 -- host/auth.sh@44 -- # keyid=4 00:13:32.394 15:36:33 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:32.394 15:36:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:32.394 15:36:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:32.394 15:36:33 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:32.394 15:36:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:13:32.394 15:36:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:32.394 15:36:33 -- host/auth.sh@68 -- # digest=sha256 00:13:32.394 15:36:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:32.394 15:36:33 -- host/auth.sh@68 -- # keyid=4 00:13:32.394 15:36:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.394 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.394 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.394 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:32.394 15:36:33 -- nvmf/common.sh@717 -- # local ip 00:13:32.394 15:36:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:32.394 15:36:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:32.394 15:36:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:32.394 15:36:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:32.394 15:36:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:32.394 15:36:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:32.394 15:36:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:32.394 15:36:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:32.394 15:36:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:32.394 15:36:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:32.394 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.394 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.394 nvme0n1 00:13:32.394 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.394 15:36:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:32.394 15:36:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:32.394 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.394 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.652 15:36:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.652 15:36:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:32.652 15:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.652 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 15:36:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.652 15:36:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.652 15:36:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:32.652 15:36:33 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:13:32.652 15:36:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:32.652 15:36:33 -- host/auth.sh@44 -- # digest=sha256 00:13:32.652 15:36:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:32.652 15:36:33 -- host/auth.sh@44 -- # keyid=0 00:13:32.652 15:36:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:32.652 15:36:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:32.652 15:36:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:33.219 15:36:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:33.219 15:36:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:13:33.219 15:36:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:33.219 15:36:34 -- host/auth.sh@68 -- # digest=sha256 00:13:33.219 15:36:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:33.219 15:36:34 -- host/auth.sh@68 -- # keyid=0 00:13:33.219 15:36:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.219 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.219 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.219 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.219 15:36:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:33.219 15:36:34 -- nvmf/common.sh@717 -- # local ip 00:13:33.219 15:36:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:33.219 15:36:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:33.219 15:36:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:33.219 15:36:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:33.219 15:36:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:33.219 15:36:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:33.219 15:36:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:33.219 15:36:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:33.219 15:36:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:33.219 15:36:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:33.219 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.219 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.219 nvme0n1 00:13:33.219 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.219 15:36:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:33.219 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.219 15:36:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:33.219 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.219 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.476 15:36:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.476 15:36:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:33.476 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.476 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.476 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.476 15:36:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:33.476 15:36:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:13:33.476 15:36:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:33.476 15:36:34 -- host/auth.sh@44 -- # 
digest=sha256 00:13:33.476 15:36:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:33.476 15:36:34 -- host/auth.sh@44 -- # keyid=1 00:13:33.476 15:36:34 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:33.476 15:36:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:33.476 15:36:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:33.476 15:36:34 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:33.476 15:36:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:13:33.477 15:36:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:33.477 15:36:34 -- host/auth.sh@68 -- # digest=sha256 00:13:33.477 15:36:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:33.477 15:36:34 -- host/auth.sh@68 -- # keyid=1 00:13:33.477 15:36:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.477 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.477 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.477 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.477 15:36:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:33.477 15:36:34 -- nvmf/common.sh@717 -- # local ip 00:13:33.477 15:36:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:33.477 15:36:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:33.477 15:36:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:33.477 15:36:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:33.477 15:36:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:33.477 15:36:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:33.477 15:36:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:33.477 15:36:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:33.477 15:36:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:33.477 15:36:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:33.477 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.477 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.477 nvme0n1 00:13:33.477 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.734 15:36:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:33.734 15:36:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:33.734 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.734 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.734 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.734 15:36:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.734 15:36:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:33.734 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.734 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.734 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.734 15:36:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:33.734 15:36:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:13:33.734 15:36:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:33.734 15:36:34 -- host/auth.sh@44 -- # digest=sha256 00:13:33.734 15:36:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:33.734 15:36:34 -- host/auth.sh@44 
-- # keyid=2 00:13:33.734 15:36:34 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:33.734 15:36:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:33.734 15:36:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:33.734 15:36:34 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:33.734 15:36:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:13:33.734 15:36:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:33.734 15:36:34 -- host/auth.sh@68 -- # digest=sha256 00:13:33.734 15:36:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:33.734 15:36:34 -- host/auth.sh@68 -- # keyid=2 00:13:33.734 15:36:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.734 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.734 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:13:33.734 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.734 15:36:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:33.734 15:36:35 -- nvmf/common.sh@717 -- # local ip 00:13:33.734 15:36:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:33.734 15:36:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:33.734 15:36:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:33.734 15:36:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:33.734 15:36:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:33.734 15:36:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:33.734 15:36:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:33.734 15:36:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:33.734 15:36:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:33.734 15:36:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:33.734 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.734 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.991 nvme0n1 00:13:33.991 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.991 15:36:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:33.991 15:36:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:33.991 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.991 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.991 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.991 15:36:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.991 15:36:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:33.991 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.991 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.991 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.991 15:36:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:33.991 15:36:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:13:33.991 15:36:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:33.991 15:36:35 -- host/auth.sh@44 -- # digest=sha256 00:13:33.991 15:36:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:33.991 15:36:35 -- host/auth.sh@44 -- # keyid=3 00:13:33.991 15:36:35 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:33.991 15:36:35 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:33.991 15:36:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:33.991 15:36:35 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:33.991 15:36:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:13:33.991 15:36:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:33.991 15:36:35 -- host/auth.sh@68 -- # digest=sha256 00:13:33.991 15:36:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:33.991 15:36:35 -- host/auth.sh@68 -- # keyid=3 00:13:33.991 15:36:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.991 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.991 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.991 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.991 15:36:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:33.991 15:36:35 -- nvmf/common.sh@717 -- # local ip 00:13:33.991 15:36:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:33.991 15:36:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:33.991 15:36:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:33.991 15:36:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:33.991 15:36:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:33.991 15:36:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:33.991 15:36:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:33.991 15:36:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:33.991 15:36:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:33.991 15:36:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:33.991 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.991 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 nvme0n1 00:13:34.249 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.249 15:36:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:34.249 15:36:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:34.249 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.249 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.249 15:36:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.249 15:36:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:34.249 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.249 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.249 15:36:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:34.249 15:36:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:13:34.249 15:36:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:34.249 15:36:35 -- host/auth.sh@44 -- # digest=sha256 00:13:34.249 15:36:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:34.249 15:36:35 -- host/auth.sh@44 -- # keyid=4 00:13:34.249 15:36:35 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:34.249 15:36:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:34.249 15:36:35 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:13:34.249 15:36:35 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:34.249 15:36:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:13:34.249 15:36:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:34.249 15:36:35 -- host/auth.sh@68 -- # digest=sha256 00:13:34.249 15:36:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:34.249 15:36:35 -- host/auth.sh@68 -- # keyid=4 00:13:34.249 15:36:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:34.249 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.249 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.249 15:36:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:34.249 15:36:35 -- nvmf/common.sh@717 -- # local ip 00:13:34.249 15:36:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:34.249 15:36:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:34.249 15:36:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:34.249 15:36:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:34.249 15:36:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:34.249 15:36:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:34.249 15:36:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:34.249 15:36:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:34.249 15:36:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:34.249 15:36:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:34.249 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.249 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.506 nvme0n1 00:13:34.506 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.506 15:36:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:34.506 15:36:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:34.506 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.506 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.506 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.506 15:36:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.506 15:36:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:34.506 15:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.506 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:13:34.506 15:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.506 15:36:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.506 15:36:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:34.506 15:36:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:13:34.506 15:36:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:34.506 15:36:35 -- host/auth.sh@44 -- # digest=sha256 00:13:34.506 15:36:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:34.506 15:36:35 -- host/auth.sh@44 -- # keyid=0 00:13:34.506 15:36:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:34.506 15:36:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:34.506 15:36:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:36.403 15:36:37 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:36.403 15:36:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:13:36.403 15:36:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:36.403 15:36:37 -- host/auth.sh@68 -- # digest=sha256 00:13:36.404 15:36:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:36.404 15:36:37 -- host/auth.sh@68 -- # keyid=0 00:13:36.404 15:36:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.404 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.404 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.404 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.404 15:36:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:36.404 15:36:37 -- nvmf/common.sh@717 -- # local ip 00:13:36.404 15:36:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:36.404 15:36:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:36.404 15:36:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:36.404 15:36:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:36.404 15:36:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:36.404 15:36:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:36.404 15:36:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:36.404 15:36:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:36.404 15:36:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:36.404 15:36:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:36.404 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.404 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.664 nvme0n1 00:13:36.664 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.664 15:36:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:36.664 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.664 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.664 15:36:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:36.664 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.664 15:36:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.664 15:36:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:36.664 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.664 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.664 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.664 15:36:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:36.664 15:36:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:13:36.664 15:36:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:36.664 15:36:37 -- host/auth.sh@44 -- # digest=sha256 00:13:36.664 15:36:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:36.664 15:36:37 -- host/auth.sh@44 -- # keyid=1 00:13:36.664 15:36:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:36.664 15:36:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:36.664 15:36:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:36.664 15:36:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:36.664 15:36:37 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:13:36.664 15:36:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:36.664 15:36:37 -- host/auth.sh@68 -- # digest=sha256 00:13:36.664 15:36:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:36.664 15:36:37 -- host/auth.sh@68 -- # keyid=1 00:13:36.664 15:36:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.664 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.664 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.664 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.664 15:36:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:36.664 15:36:37 -- nvmf/common.sh@717 -- # local ip 00:13:36.664 15:36:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:36.664 15:36:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:36.664 15:36:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:36.664 15:36:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:36.664 15:36:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:36.664 15:36:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:36.664 15:36:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:36.664 15:36:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:36.664 15:36:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:36.664 15:36:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:36.664 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.664 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.922 nvme0n1 00:13:36.922 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.922 15:36:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:36.922 15:36:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:36.922 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.922 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.180 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.180 15:36:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.180 15:36:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:37.180 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.180 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.180 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.180 15:36:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:37.180 15:36:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:13:37.180 15:36:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:37.180 15:36:38 -- host/auth.sh@44 -- # digest=sha256 00:13:37.180 15:36:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:37.180 15:36:38 -- host/auth.sh@44 -- # keyid=2 00:13:37.180 15:36:38 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:37.180 15:36:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:37.180 15:36:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:37.180 15:36:38 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:37.180 15:36:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:13:37.180 15:36:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:37.180 15:36:38 -- 
host/auth.sh@68 -- # digest=sha256 00:13:37.180 15:36:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:37.180 15:36:38 -- host/auth.sh@68 -- # keyid=2 00:13:37.180 15:36:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:37.180 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.180 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.180 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.180 15:36:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:37.180 15:36:38 -- nvmf/common.sh@717 -- # local ip 00:13:37.180 15:36:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:37.180 15:36:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:37.180 15:36:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:37.180 15:36:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:37.180 15:36:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:37.180 15:36:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:37.180 15:36:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:37.180 15:36:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:37.180 15:36:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:37.180 15:36:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:37.180 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.180 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 nvme0n1 00:13:37.438 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.438 15:36:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:37.438 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.438 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 15:36:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:37.438 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.438 15:36:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.438 15:36:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:37.438 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.438 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.438 15:36:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:37.438 15:36:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:13:37.438 15:36:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:37.438 15:36:38 -- host/auth.sh@44 -- # digest=sha256 00:13:37.438 15:36:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:37.438 15:36:38 -- host/auth.sh@44 -- # keyid=3 00:13:37.438 15:36:38 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:37.438 15:36:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:37.438 15:36:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:37.438 15:36:38 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:37.438 15:36:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:13:37.438 15:36:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:37.438 15:36:38 -- host/auth.sh@68 -- # digest=sha256 00:13:37.438 15:36:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:37.438 15:36:38 
-- host/auth.sh@68 -- # keyid=3 00:13:37.438 15:36:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:37.438 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.438 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 15:36:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.438 15:36:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:37.438 15:36:38 -- nvmf/common.sh@717 -- # local ip 00:13:37.438 15:36:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:37.438 15:36:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:37.438 15:36:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:37.438 15:36:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:37.438 15:36:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:37.438 15:36:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:37.438 15:36:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:37.438 15:36:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:37.438 15:36:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:37.438 15:36:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:37.438 15:36:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.438 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:38.004 nvme0n1 00:13:38.004 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.004 15:36:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:38.004 15:36:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:38.004 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.004 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.004 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.004 15:36:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.004 15:36:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:38.004 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.004 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.004 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.004 15:36:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:38.004 15:36:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:13:38.004 15:36:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:38.004 15:36:39 -- host/auth.sh@44 -- # digest=sha256 00:13:38.004 15:36:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:38.004 15:36:39 -- host/auth.sh@44 -- # keyid=4 00:13:38.004 15:36:39 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:38.004 15:36:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:38.004 15:36:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:38.004 15:36:39 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:38.004 15:36:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:13:38.004 15:36:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:38.004 15:36:39 -- host/auth.sh@68 -- # digest=sha256 00:13:38.004 15:36:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:38.004 15:36:39 -- host/auth.sh@68 -- # keyid=4 00:13:38.004 15:36:39 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:38.004 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.004 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.004 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.004 15:36:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:38.004 15:36:39 -- nvmf/common.sh@717 -- # local ip 00:13:38.004 15:36:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:38.004 15:36:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:38.004 15:36:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:38.004 15:36:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:38.004 15:36:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:38.004 15:36:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:38.004 15:36:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:38.004 15:36:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:38.004 15:36:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:38.004 15:36:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:38.004 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.004 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.262 nvme0n1 00:13:38.262 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.262 15:36:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:38.262 15:36:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:38.262 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.262 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.262 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.262 15:36:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.262 15:36:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:38.262 15:36:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.262 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.521 15:36:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.521 15:36:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.521 15:36:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:38.521 15:36:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:13:38.521 15:36:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:38.521 15:36:39 -- host/auth.sh@44 -- # digest=sha256 00:13:38.521 15:36:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:38.521 15:36:39 -- host/auth.sh@44 -- # keyid=0 00:13:38.521 15:36:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:38.521 15:36:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:38.521 15:36:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:42.704 15:36:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:42.704 15:36:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:13:42.704 15:36:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:42.704 15:36:43 -- host/auth.sh@68 -- # digest=sha256 00:13:42.704 15:36:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:42.704 15:36:43 -- host/auth.sh@68 -- # keyid=0 00:13:42.704 15:36:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:13:42.704 15:36:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:42.705 15:36:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.705 15:36:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:42.705 15:36:43 -- nvmf/common.sh@717 -- # local ip 00:13:42.705 15:36:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:42.705 15:36:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:42.705 15:36:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:42.705 15:36:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:42.705 15:36:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:42.705 15:36:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:42.705 15:36:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:42.705 15:36:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:42.705 15:36:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:42.705 15:36:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:42.705 15:36:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:43 -- common/autotest_common.sh@10 -- # set +x 00:13:42.705 nvme0n1 00:13:42.705 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.705 15:36:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:42.705 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:42.705 15:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:42.705 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.705 15:36:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.705 15:36:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:42.705 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:42.705 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.705 15:36:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:42.705 15:36:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:13:42.705 15:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:42.705 15:36:44 -- host/auth.sh@44 -- # digest=sha256 00:13:42.705 15:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:42.705 15:36:44 -- host/auth.sh@44 -- # keyid=1 00:13:42.705 15:36:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:42.705 15:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:42.705 15:36:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:42.705 15:36:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:42.705 15:36:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:13:42.705 15:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:42.705 15:36:44 -- host/auth.sh@68 -- # digest=sha256 00:13:42.705 15:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:42.705 15:36:44 -- host/auth.sh@68 -- # keyid=1 00:13:42.705 15:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:42.705 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:44 -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.705 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.705 15:36:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:42.705 15:36:44 -- nvmf/common.sh@717 -- # local ip 00:13:42.705 15:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:42.705 15:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:42.705 15:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:42.705 15:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:42.705 15:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:42.705 15:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:42.705 15:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:42.705 15:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:42.705 15:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:42.705 15:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:42.705 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.705 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.640 nvme0n1 00:13:43.640 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.640 15:36:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:43.640 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.640 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.640 15:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:43.640 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.640 15:36:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.640 15:36:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:43.640 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.640 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.640 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.640 15:36:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:43.640 15:36:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:13:43.640 15:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:43.641 15:36:44 -- host/auth.sh@44 -- # digest=sha256 00:13:43.641 15:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:43.641 15:36:44 -- host/auth.sh@44 -- # keyid=2 00:13:43.641 15:36:44 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:43.641 15:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:43.641 15:36:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:43.641 15:36:44 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:43.641 15:36:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:13:43.641 15:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:43.641 15:36:44 -- host/auth.sh@68 -- # digest=sha256 00:13:43.641 15:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:43.641 15:36:44 -- host/auth.sh@68 -- # keyid=2 00:13:43.641 15:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:43.641 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.641 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.641 15:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.641 15:36:44 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:13:43.641 15:36:44 -- nvmf/common.sh@717 -- # local ip 00:13:43.641 15:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:43.641 15:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:43.641 15:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:43.641 15:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:43.641 15:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:43.641 15:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:43.641 15:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:43.641 15:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:43.641 15:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:43.641 15:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:43.641 15:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.641 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:44.207 nvme0n1 00:13:44.207 15:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.207 15:36:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:44.207 15:36:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:44.207 15:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.207 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.207 15:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.207 15:36:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.207 15:36:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:44.207 15:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.207 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.207 15:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.207 15:36:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:44.207 15:36:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:13:44.207 15:36:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:44.207 15:36:45 -- host/auth.sh@44 -- # digest=sha256 00:13:44.207 15:36:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:44.207 15:36:45 -- host/auth.sh@44 -- # keyid=3 00:13:44.207 15:36:45 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:44.207 15:36:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:44.207 15:36:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:44.207 15:36:45 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:44.207 15:36:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:13:44.207 15:36:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:44.207 15:36:45 -- host/auth.sh@68 -- # digest=sha256 00:13:44.207 15:36:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:44.207 15:36:45 -- host/auth.sh@68 -- # keyid=3 00:13:44.207 15:36:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:44.207 15:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.207 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.207 15:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.207 15:36:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:44.207 15:36:45 -- nvmf/common.sh@717 -- # local ip 00:13:44.207 15:36:45 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:13:44.207 15:36:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:44.207 15:36:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:44.207 15:36:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:44.207 15:36:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:44.207 15:36:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:44.207 15:36:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:44.207 15:36:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:44.207 15:36:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:44.207 15:36:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:44.207 15:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.207 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.775 nvme0n1 00:13:44.775 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.775 15:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:44.775 15:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:44.775 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.775 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.775 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.775 15:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.775 15:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:44.775 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.775 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.775 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.775 15:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:44.775 15:36:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:13:44.775 15:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:44.775 15:36:46 -- host/auth.sh@44 -- # digest=sha256 00:13:44.775 15:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:44.775 15:36:46 -- host/auth.sh@44 -- # keyid=4 00:13:44.775 15:36:46 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:44.775 15:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:13:44.775 15:36:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:44.775 15:36:46 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:44.775 15:36:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:13:44.775 15:36:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:44.775 15:36:46 -- host/auth.sh@68 -- # digest=sha256 00:13:44.775 15:36:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:44.775 15:36:46 -- host/auth.sh@68 -- # keyid=4 00:13:44.775 15:36:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:44.775 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.775 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.775 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.775 15:36:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:44.775 15:36:46 -- nvmf/common.sh@717 -- # local ip 00:13:44.775 15:36:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:44.775 15:36:46 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:13:44.775 15:36:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:44.775 15:36:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:44.775 15:36:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:44.775 15:36:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:44.775 15:36:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:44.775 15:36:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:44.775 15:36:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:44.775 15:36:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:44.775 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.775 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.710 nvme0n1 00:13:45.710 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.710 15:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:45.710 15:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:45.710 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.710 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.710 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.710 15:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.710 15:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:45.710 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.710 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.710 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.710 15:36:46 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:13:45.710 15:36:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.710 15:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:45.710 15:36:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:13:45.710 15:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:45.710 15:36:46 -- host/auth.sh@44 -- # digest=sha384 00:13:45.710 15:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:45.710 15:36:46 -- host/auth.sh@44 -- # keyid=0 00:13:45.710 15:36:46 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:45.710 15:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:45.710 15:36:46 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:45.710 15:36:46 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:45.710 15:36:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:13:45.710 15:36:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:45.710 15:36:46 -- host/auth.sh@68 -- # digest=sha384 00:13:45.710 15:36:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:45.710 15:36:46 -- host/auth.sh@68 -- # keyid=0 00:13:45.710 15:36:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.710 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.710 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.710 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.710 15:36:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:45.710 15:36:46 -- nvmf/common.sh@717 -- # local ip 00:13:45.711 15:36:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:45.711 15:36:46 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:13:45.711 15:36:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:45.711 15:36:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:45.711 15:36:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:45.711 15:36:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:45.711 15:36:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:45.711 15:36:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:45.711 15:36:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:45.711 15:36:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:45.711 15:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.711 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.711 nvme0n1 00:13:45.711 15:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.711 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:45.711 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:45.711 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.711 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.711 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.711 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.711 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:45.711 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.711 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.711 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.711 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:45.711 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:13:45.711 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:45.711 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:45.711 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:45.711 15:36:47 -- host/auth.sh@44 -- # keyid=1 00:13:45.711 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:45.711 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:45.711 15:36:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:45.711 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:45.711 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:13:45.711 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:45.711 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:45.711 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:45.711 15:36:47 -- host/auth.sh@68 -- # keyid=1 00:13:45.711 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.711 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.711 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.711 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.711 15:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:45.711 15:36:47 -- nvmf/common.sh@717 -- # local ip 00:13:45.711 15:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:45.711 15:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:45.711 15:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:45.711 
15:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:45.711 15:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:45.711 15:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:45.711 15:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:45.711 15:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:45.711 15:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:45.711 15:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:45.711 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.711 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 nvme0n1 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:45.969 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:45.969 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.969 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:45.969 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.969 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:45.969 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:13:45.969 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:45.969 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:45.969 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:45.969 15:36:47 -- host/auth.sh@44 -- # keyid=2 00:13:45.969 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:45.969 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:45.969 15:36:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:45.969 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:45.969 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:13:45.969 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:45.969 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:45.969 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:45.969 15:36:47 -- host/auth.sh@68 -- # keyid=2 00:13:45.969 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.969 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.969 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:45.969 15:36:47 -- nvmf/common.sh@717 -- # local ip 00:13:45.969 15:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:45.969 15:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:45.969 15:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:45.969 15:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:45.969 15:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:45.969 15:36:47 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:45.969 15:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:45.969 15:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:45.969 15:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:45.969 15:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:45.969 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.969 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 nvme0n1 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.969 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:45.969 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:45.969 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.969 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.969 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:46.227 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:13:46.227 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # keyid=3 00:13:46.227 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:46.227 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:46.227 15:36:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:46.227 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:13:46.227 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # keyid=3 00:13:46.227 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:46.227 15:36:47 -- nvmf/common.sh@717 -- # local ip 00:13:46.227 15:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:46.227 15:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:46.227 15:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:46.227 15:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:46.227 15:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:46.227 15:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:46.227 15:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:13:46.227 15:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:46.227 15:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:46.227 15:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 nvme0n1 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:46.227 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:13:46.227 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@44 -- # keyid=4 00:13:46.227 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:46.227 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:46.227 15:36:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:46.227 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:13:46.227 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:46.227 15:36:47 -- host/auth.sh@68 -- # keyid=4 00:13:46.227 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.227 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.227 15:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:46.227 15:36:47 -- nvmf/common.sh@717 -- # local ip 00:13:46.227 15:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:46.227 15:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:46.227 15:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:46.227 15:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:46.227 15:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:46.227 15:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:46.227 15:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:46.227 15:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:46.227 
15:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:46.227 15:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:46.227 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.227 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.484 nvme0n1 00:13:46.484 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.484 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:46.484 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.484 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:46.484 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.484 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.484 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.484 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:46.484 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.484 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.484 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.484 15:36:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.484 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:46.484 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:13:46.484 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:46.484 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:46.484 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:46.484 15:36:47 -- host/auth.sh@44 -- # keyid=0 00:13:46.484 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:46.484 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:46.484 15:36:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:46.484 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:46.484 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:13:46.484 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:46.484 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:46.484 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:46.484 15:36:47 -- host/auth.sh@68 -- # keyid=0 00:13:46.484 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.484 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.484 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.484 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.484 15:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:46.484 15:36:47 -- nvmf/common.sh@717 -- # local ip 00:13:46.484 15:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:46.484 15:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:46.484 15:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:46.484 15:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:46.484 15:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:46.484 15:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:46.484 15:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:46.484 15:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:46.484 15:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:46.484 15:36:47 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:46.484 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.484 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 nvme0n1 00:13:46.742 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:46.742 15:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:46.742 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.742 15:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:46.742 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 15:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:46.742 15:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:13:46.742 15:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:46.742 15:36:47 -- host/auth.sh@44 -- # digest=sha384 00:13:46.742 15:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:46.742 15:36:47 -- host/auth.sh@44 -- # keyid=1 00:13:46.742 15:36:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:46.742 15:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:46.742 15:36:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:46.742 15:36:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:46.742 15:36:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:13:46.742 15:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:46.742 15:36:47 -- host/auth.sh@68 -- # digest=sha384 00:13:46.742 15:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:46.742 15:36:47 -- host/auth.sh@68 -- # keyid=1 00:13:46.742 15:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.742 15:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:46.742 15:36:48 -- nvmf/common.sh@717 -- # local ip 00:13:46.742 15:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:46.742 15:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:46.742 15:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:46.742 15:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:46.742 15:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:46.742 15:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:46.742 15:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:46.742 15:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:46.742 15:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:46.742 15:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:46.742 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 nvme0n1 00:13:46.742 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:46.742 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.742 15:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:46.742 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.742 15:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.742 15:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:46.742 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.742 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.000 15:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:47.000 15:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:13:47.000 15:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:47.000 15:36:48 -- host/auth.sh@44 -- # digest=sha384 00:13:47.000 15:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:47.000 15:36:48 -- host/auth.sh@44 -- # keyid=2 00:13:47.000 15:36:48 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:47.000 15:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:47.000 15:36:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:47.000 15:36:48 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:47.000 15:36:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:13:47.000 15:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:47.000 15:36:48 -- host/auth.sh@68 -- # digest=sha384 00:13:47.000 15:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:47.000 15:36:48 -- host/auth.sh@68 -- # keyid=2 00:13:47.000 15:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.000 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.000 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.000 15:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:47.000 15:36:48 -- nvmf/common.sh@717 -- # local ip 00:13:47.000 15:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:47.001 15:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:47.001 15:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:47.001 15:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:47.001 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.001 
15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.001 nvme0n1 00:13:47.001 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.001 15:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:47.001 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.001 15:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:47.001 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.001 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.001 15:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.001 15:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:47.001 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.001 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.001 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.001 15:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:47.001 15:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:13:47.001 15:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:47.001 15:36:48 -- host/auth.sh@44 -- # digest=sha384 00:13:47.001 15:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:47.001 15:36:48 -- host/auth.sh@44 -- # keyid=3 00:13:47.001 15:36:48 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:47.001 15:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:47.001 15:36:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:47.001 15:36:48 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:47.001 15:36:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:13:47.001 15:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:47.001 15:36:48 -- host/auth.sh@68 -- # digest=sha384 00:13:47.001 15:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:47.001 15:36:48 -- host/auth.sh@68 -- # keyid=3 00:13:47.001 15:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.001 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.001 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.001 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.001 15:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:47.001 15:36:48 -- nvmf/common.sh@717 -- # local ip 00:13:47.001 15:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:47.001 15:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:47.001 15:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:47.001 15:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:47.001 15:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:47.001 15:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:47.001 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.001 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.259 nvme0n1 00:13:47.259 15:36:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.259 15:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:47.259 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.259 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.259 15:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:47.259 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.259 15:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.259 15:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:47.259 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.259 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.259 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.259 15:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:47.259 15:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:13:47.259 15:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:47.259 15:36:48 -- host/auth.sh@44 -- # digest=sha384 00:13:47.259 15:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:47.259 15:36:48 -- host/auth.sh@44 -- # keyid=4 00:13:47.259 15:36:48 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:47.259 15:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:47.259 15:36:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:47.259 15:36:48 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:47.259 15:36:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:13:47.259 15:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:47.259 15:36:48 -- host/auth.sh@68 -- # digest=sha384 00:13:47.259 15:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:47.259 15:36:48 -- host/auth.sh@68 -- # keyid=4 00:13:47.259 15:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.259 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.259 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.259 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.259 15:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:47.259 15:36:48 -- nvmf/common.sh@717 -- # local ip 00:13:47.259 15:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:47.259 15:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:47.259 15:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:47.259 15:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:47.259 15:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:47.259 15:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:47.259 15:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:47.259 15:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:47.259 15:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:47.259 15:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:47.259 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.259 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.517 nvme0n1 00:13:47.517 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.517 15:36:48 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:47.517 15:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:47.517 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.517 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.517 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.517 15:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.517 15:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:47.517 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.517 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.517 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.517 15:36:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.517 15:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:47.517 15:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:13:47.517 15:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:47.517 15:36:48 -- host/auth.sh@44 -- # digest=sha384 00:13:47.517 15:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:47.517 15:36:48 -- host/auth.sh@44 -- # keyid=0 00:13:47.517 15:36:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:47.517 15:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:47.517 15:36:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:47.517 15:36:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:47.517 15:36:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:13:47.517 15:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:47.517 15:36:48 -- host/auth.sh@68 -- # digest=sha384 00:13:47.517 15:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:47.517 15:36:48 -- host/auth.sh@68 -- # keyid=0 00:13:47.517 15:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.517 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.517 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.517 15:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.517 15:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:47.517 15:36:48 -- nvmf/common.sh@717 -- # local ip 00:13:47.517 15:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:47.517 15:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:47.517 15:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:47.517 15:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:47.517 15:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:47.517 15:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:47.517 15:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:47.518 15:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:47.518 15:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:47.518 15:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:47.518 15:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.518 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.776 nvme0n1 00:13:47.776 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.776 15:36:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:47.776 15:36:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.776 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.776 15:36:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:47.776 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.776 15:36:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.776 15:36:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:47.776 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.776 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.776 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.776 15:36:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:47.776 15:36:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:13:47.776 15:36:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:47.776 15:36:49 -- host/auth.sh@44 -- # digest=sha384 00:13:47.776 15:36:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:47.776 15:36:49 -- host/auth.sh@44 -- # keyid=1 00:13:47.776 15:36:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:47.776 15:36:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:47.776 15:36:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:47.776 15:36:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:47.776 15:36:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:13:47.776 15:36:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:47.776 15:36:49 -- host/auth.sh@68 -- # digest=sha384 00:13:47.776 15:36:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:47.776 15:36:49 -- host/auth.sh@68 -- # keyid=1 00:13:47.776 15:36:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.776 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.776 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.776 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.776 15:36:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:47.776 15:36:49 -- nvmf/common.sh@717 -- # local ip 00:13:47.776 15:36:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:47.776 15:36:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:47.776 15:36:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:47.776 15:36:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:47.776 15:36:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:47.776 15:36:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:47.776 15:36:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:47.776 15:36:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:47.776 15:36:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:47.776 15:36:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:47.776 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.776 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.035 nvme0n1 00:13:48.035 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.035 15:36:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:48.035 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.035 15:36:49 -- common/autotest_common.sh@10 -- # set +x 
00:13:48.035 15:36:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:48.035 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.035 15:36:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.035 15:36:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:48.035 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.035 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.035 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.035 15:36:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:48.035 15:36:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:13:48.035 15:36:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:48.035 15:36:49 -- host/auth.sh@44 -- # digest=sha384 00:13:48.035 15:36:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:48.035 15:36:49 -- host/auth.sh@44 -- # keyid=2 00:13:48.035 15:36:49 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:48.035 15:36:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:48.035 15:36:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:48.035 15:36:49 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:48.035 15:36:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:13:48.035 15:36:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:48.035 15:36:49 -- host/auth.sh@68 -- # digest=sha384 00:13:48.035 15:36:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:48.035 15:36:49 -- host/auth.sh@68 -- # keyid=2 00:13:48.035 15:36:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.035 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.035 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.035 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.035 15:36:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:48.035 15:36:49 -- nvmf/common.sh@717 -- # local ip 00:13:48.035 15:36:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:48.035 15:36:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:48.035 15:36:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:48.035 15:36:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:48.035 15:36:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:48.035 15:36:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:48.035 15:36:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:48.035 15:36:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:48.035 15:36:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:48.035 15:36:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:48.035 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.035 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.294 nvme0n1 00:13:48.294 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.294 15:36:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:48.294 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.294 15:36:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:48.294 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.294 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.294 15:36:49 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.294 15:36:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:48.294 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.294 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.294 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.294 15:36:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:48.294 15:36:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:13:48.294 15:36:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:48.294 15:36:49 -- host/auth.sh@44 -- # digest=sha384 00:13:48.294 15:36:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:48.294 15:36:49 -- host/auth.sh@44 -- # keyid=3 00:13:48.294 15:36:49 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:48.294 15:36:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:48.294 15:36:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:48.294 15:36:49 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:48.294 15:36:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:13:48.294 15:36:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:48.294 15:36:49 -- host/auth.sh@68 -- # digest=sha384 00:13:48.294 15:36:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:48.294 15:36:49 -- host/auth.sh@68 -- # keyid=3 00:13:48.294 15:36:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.294 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.294 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.294 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.294 15:36:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:48.294 15:36:49 -- nvmf/common.sh@717 -- # local ip 00:13:48.294 15:36:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:48.294 15:36:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:48.294 15:36:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:48.294 15:36:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:48.294 15:36:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:48.294 15:36:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:48.294 15:36:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:48.294 15:36:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:48.294 15:36:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:48.294 15:36:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:48.294 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.294 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 nvme0n1 00:13:48.554 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.554 15:36:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:48.554 15:36:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:48.554 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.554 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.554 15:36:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.554 15:36:49 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:13:48.554 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.554 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.554 15:36:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:48.554 15:36:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:13:48.554 15:36:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:48.554 15:36:49 -- host/auth.sh@44 -- # digest=sha384 00:13:48.554 15:36:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:48.554 15:36:49 -- host/auth.sh@44 -- # keyid=4 00:13:48.554 15:36:49 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:48.554 15:36:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:48.554 15:36:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:48.554 15:36:49 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:48.554 15:36:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:13:48.554 15:36:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:48.554 15:36:49 -- host/auth.sh@68 -- # digest=sha384 00:13:48.554 15:36:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:48.554 15:36:49 -- host/auth.sh@68 -- # keyid=4 00:13:48.554 15:36:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.554 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.554 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 15:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.554 15:36:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:48.554 15:36:49 -- nvmf/common.sh@717 -- # local ip 00:13:48.554 15:36:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:48.554 15:36:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:48.554 15:36:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:48.554 15:36:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:48.554 15:36:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:48.554 15:36:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:48.554 15:36:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:48.554 15:36:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:48.554 15:36:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:48.554 15:36:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:48.554 15:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.554 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.813 nvme0n1 00:13:48.813 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.813 15:36:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:48.813 15:36:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:48.813 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.813 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.813 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.813 15:36:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.813 15:36:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:48.813 15:36:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.813 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.813 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.813 15:36:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.813 15:36:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:48.813 15:36:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:13:48.813 15:36:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:48.813 15:36:50 -- host/auth.sh@44 -- # digest=sha384 00:13:48.813 15:36:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:48.813 15:36:50 -- host/auth.sh@44 -- # keyid=0 00:13:48.813 15:36:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:48.813 15:36:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:48.813 15:36:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:48.813 15:36:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:48.813 15:36:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:13:48.813 15:36:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:48.813 15:36:50 -- host/auth.sh@68 -- # digest=sha384 00:13:48.813 15:36:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:48.813 15:36:50 -- host/auth.sh@68 -- # keyid=0 00:13:48.813 15:36:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:48.813 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.813 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.813 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.813 15:36:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:48.813 15:36:50 -- nvmf/common.sh@717 -- # local ip 00:13:48.813 15:36:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:48.813 15:36:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:48.813 15:36:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:48.813 15:36:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:48.813 15:36:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:48.813 15:36:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:48.813 15:36:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:48.813 15:36:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:48.813 15:36:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:48.813 15:36:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:48.813 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.813 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.071 nvme0n1 00:13:49.071 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.071 15:36:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:49.071 15:36:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:49.071 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.071 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.071 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.329 15:36:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.329 15:36:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:49.329 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.329 15:36:50 -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.329 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.330 15:36:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:49.330 15:36:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:13:49.330 15:36:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:49.330 15:36:50 -- host/auth.sh@44 -- # digest=sha384 00:13:49.330 15:36:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:49.330 15:36:50 -- host/auth.sh@44 -- # keyid=1 00:13:49.330 15:36:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:49.330 15:36:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:49.330 15:36:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:49.330 15:36:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:49.330 15:36:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:13:49.330 15:36:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:49.330 15:36:50 -- host/auth.sh@68 -- # digest=sha384 00:13:49.330 15:36:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:49.330 15:36:50 -- host/auth.sh@68 -- # keyid=1 00:13:49.330 15:36:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:49.330 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.330 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.330 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.330 15:36:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:49.330 15:36:50 -- nvmf/common.sh@717 -- # local ip 00:13:49.330 15:36:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:49.330 15:36:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:49.330 15:36:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:49.330 15:36:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:49.330 15:36:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:49.330 15:36:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:49.330 15:36:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:49.330 15:36:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:49.330 15:36:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:49.330 15:36:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:49.330 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.330 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.589 nvme0n1 00:13:49.589 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.589 15:36:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:49.589 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.589 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.589 15:36:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:49.589 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.589 15:36:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.589 15:36:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:49.589 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.589 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.589 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:13:49.589 15:36:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:49.589 15:36:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:13:49.589 15:36:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:49.589 15:36:50 -- host/auth.sh@44 -- # digest=sha384 00:13:49.589 15:36:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:49.589 15:36:50 -- host/auth.sh@44 -- # keyid=2 00:13:49.589 15:36:50 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:49.589 15:36:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:49.589 15:36:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:49.589 15:36:50 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:49.589 15:36:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:13:49.589 15:36:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:49.589 15:36:50 -- host/auth.sh@68 -- # digest=sha384 00:13:49.589 15:36:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:49.589 15:36:50 -- host/auth.sh@68 -- # keyid=2 00:13:49.589 15:36:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:49.589 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.589 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:49.589 15:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.589 15:36:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:49.589 15:36:50 -- nvmf/common.sh@717 -- # local ip 00:13:49.589 15:36:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:49.589 15:36:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:49.589 15:36:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:49.589 15:36:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:49.589 15:36:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:49.589 15:36:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:49.589 15:36:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:49.589 15:36:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:49.589 15:36:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:49.589 15:36:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:49.589 15:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.589 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 nvme0n1 00:13:50.155 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.155 15:36:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:50.155 15:36:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:50.155 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.155 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.155 15:36:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.155 15:36:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:50.155 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.155 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.155 15:36:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:50.155 15:36:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:13:50.155 15:36:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:50.155 15:36:51 -- host/auth.sh@44 -- # digest=sha384 00:13:50.155 15:36:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:50.155 15:36:51 -- host/auth.sh@44 -- # keyid=3 00:13:50.155 15:36:51 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:50.155 15:36:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:50.155 15:36:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:50.155 15:36:51 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:50.155 15:36:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:13:50.155 15:36:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:50.155 15:36:51 -- host/auth.sh@68 -- # digest=sha384 00:13:50.155 15:36:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:50.155 15:36:51 -- host/auth.sh@68 -- # keyid=3 00:13:50.155 15:36:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:50.155 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.155 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.155 15:36:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:50.155 15:36:51 -- nvmf/common.sh@717 -- # local ip 00:13:50.155 15:36:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:50.155 15:36:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:50.155 15:36:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:50.156 15:36:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:50.156 15:36:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:50.156 15:36:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:50.156 15:36:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:50.156 15:36:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:50.156 15:36:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:50.156 15:36:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:50.156 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.156 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 nvme0n1 00:13:50.413 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.413 15:36:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:50.413 15:36:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:50.413 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.413 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.413 15:36:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.413 15:36:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:50.413 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.413 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.672 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.672 15:36:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:50.672 15:36:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:13:50.672 15:36:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:50.672 15:36:51 -- host/auth.sh@44 -- 
# digest=sha384 00:13:50.672 15:36:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:50.672 15:36:51 -- host/auth.sh@44 -- # keyid=4 00:13:50.672 15:36:51 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:50.672 15:36:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:50.672 15:36:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:50.672 15:36:51 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:50.672 15:36:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:13:50.672 15:36:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:50.672 15:36:51 -- host/auth.sh@68 -- # digest=sha384 00:13:50.672 15:36:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:50.672 15:36:51 -- host/auth.sh@68 -- # keyid=4 00:13:50.672 15:36:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:50.672 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.672 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.672 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.672 15:36:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:50.672 15:36:51 -- nvmf/common.sh@717 -- # local ip 00:13:50.672 15:36:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:50.672 15:36:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:50.672 15:36:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:50.672 15:36:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:50.672 15:36:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:50.672 15:36:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:50.672 15:36:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:50.672 15:36:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:50.672 15:36:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:50.672 15:36:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:50.672 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.672 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.930 nvme0n1 00:13:50.930 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.930 15:36:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:50.930 15:36:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:50.930 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.930 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:50.930 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.930 15:36:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.930 15:36:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:50.930 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.930 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:50.930 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.930 15:36:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.930 15:36:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:50.930 15:36:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:13:50.930 15:36:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:50.930 15:36:52 -- host/auth.sh@44 -- # 
digest=sha384 00:13:50.930 15:36:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:50.930 15:36:52 -- host/auth.sh@44 -- # keyid=0 00:13:50.930 15:36:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:50.930 15:36:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:50.930 15:36:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:50.930 15:36:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:50.930 15:36:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:13:50.930 15:36:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:50.930 15:36:52 -- host/auth.sh@68 -- # digest=sha384 00:13:50.930 15:36:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:50.930 15:36:52 -- host/auth.sh@68 -- # keyid=0 00:13:50.930 15:36:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.930 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.930 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:50.930 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.930 15:36:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:50.930 15:36:52 -- nvmf/common.sh@717 -- # local ip 00:13:50.930 15:36:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:50.931 15:36:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:50.931 15:36:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:50.931 15:36:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:50.931 15:36:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:50.931 15:36:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:50.931 15:36:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:50.931 15:36:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:50.931 15:36:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:50.931 15:36:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:50.931 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.931 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 nvme0n1 00:13:51.497 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.497 15:36:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:51.497 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.497 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 15:36:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:51.497 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.755 15:36:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.755 15:36:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:51.755 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.755 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.755 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.755 15:36:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:51.755 15:36:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:13:51.755 15:36:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:51.755 15:36:52 -- host/auth.sh@44 -- # digest=sha384 00:13:51.755 15:36:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:51.755 15:36:52 -- host/auth.sh@44 -- # keyid=1 00:13:51.755 15:36:52 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:51.755 15:36:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:51.755 15:36:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:51.755 15:36:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:51.755 15:36:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:13:51.755 15:36:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:51.755 15:36:52 -- host/auth.sh@68 -- # digest=sha384 00:13:51.755 15:36:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:51.755 15:36:52 -- host/auth.sh@68 -- # keyid=1 00:13:51.755 15:36:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:51.755 15:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.755 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.755 15:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.755 15:36:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:51.755 15:36:52 -- nvmf/common.sh@717 -- # local ip 00:13:51.755 15:36:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:51.755 15:36:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:51.755 15:36:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:51.755 15:36:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:51.755 15:36:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:51.755 15:36:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:51.755 15:36:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:51.755 15:36:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:51.755 15:36:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:51.755 15:36:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:51.755 15:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.755 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.333 nvme0n1 00:13:52.333 15:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.333 15:36:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:52.333 15:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.333 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.333 15:36:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:52.333 15:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.333 15:36:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.333 15:36:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:52.333 15:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.333 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.333 15:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.333 15:36:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:52.333 15:36:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:13:52.333 15:36:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:52.333 15:36:53 -- host/auth.sh@44 -- # digest=sha384 00:13:52.333 15:36:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:52.333 15:36:53 -- host/auth.sh@44 -- # keyid=2 00:13:52.333 15:36:53 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:52.333 15:36:53 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:52.333 15:36:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:52.333 15:36:53 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:52.333 15:36:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:13:52.333 15:36:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:52.333 15:36:53 -- host/auth.sh@68 -- # digest=sha384 00:13:52.333 15:36:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:52.333 15:36:53 -- host/auth.sh@68 -- # keyid=2 00:13:52.333 15:36:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:52.333 15:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.333 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.333 15:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.333 15:36:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:52.333 15:36:53 -- nvmf/common.sh@717 -- # local ip 00:13:52.333 15:36:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:52.333 15:36:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:52.333 15:36:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:52.333 15:36:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:52.333 15:36:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:52.333 15:36:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:52.333 15:36:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:52.333 15:36:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:52.333 15:36:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:52.333 15:36:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:52.333 15:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.333 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.914 nvme0n1 00:13:52.914 15:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.914 15:36:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:52.914 15:36:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:52.914 15:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.914 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.914 15:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.173 15:36:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.173 15:36:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:53.173 15:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.173 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 15:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.173 15:36:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:53.173 15:36:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:13:53.173 15:36:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:53.173 15:36:54 -- host/auth.sh@44 -- # digest=sha384 00:13:53.173 15:36:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:53.173 15:36:54 -- host/auth.sh@44 -- # keyid=3 00:13:53.173 15:36:54 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:53.173 15:36:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:53.173 15:36:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:53.173 15:36:54 -- host/auth.sh@49 
-- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:53.173 15:36:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:13:53.173 15:36:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:53.173 15:36:54 -- host/auth.sh@68 -- # digest=sha384 00:13:53.173 15:36:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:53.173 15:36:54 -- host/auth.sh@68 -- # keyid=3 00:13:53.173 15:36:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:53.173 15:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.173 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 15:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.173 15:36:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:53.173 15:36:54 -- nvmf/common.sh@717 -- # local ip 00:13:53.173 15:36:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:53.173 15:36:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:53.173 15:36:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:53.173 15:36:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:53.173 15:36:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:53.173 15:36:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:53.173 15:36:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:53.173 15:36:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:53.173 15:36:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:53.173 15:36:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:53.173 15:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.173 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.738 nvme0n1 00:13:53.738 15:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.738 15:36:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:53.738 15:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.738 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.738 15:36:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:53.738 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.738 15:36:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.738 15:36:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:53.738 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.738 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:53.738 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.738 15:36:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:53.738 15:36:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:13:53.738 15:36:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:53.738 15:36:55 -- host/auth.sh@44 -- # digest=sha384 00:13:53.738 15:36:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:53.738 15:36:55 -- host/auth.sh@44 -- # keyid=4 00:13:53.738 15:36:55 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:53.738 15:36:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:13:53.738 15:36:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:53.738 15:36:55 -- host/auth.sh@49 -- # echo 
DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:53.739 15:36:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:13:53.739 15:36:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:53.739 15:36:55 -- host/auth.sh@68 -- # digest=sha384 00:13:53.739 15:36:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:53.739 15:36:55 -- host/auth.sh@68 -- # keyid=4 00:13:53.739 15:36:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:53.739 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.739 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:53.739 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.739 15:36:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:53.739 15:36:55 -- nvmf/common.sh@717 -- # local ip 00:13:53.739 15:36:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:53.739 15:36:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:53.739 15:36:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:53.739 15:36:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:53.739 15:36:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:53.739 15:36:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:53.739 15:36:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:53.739 15:36:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:53.739 15:36:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:53.739 15:36:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:53.739 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.739 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.305 nvme0n1 00:13:54.305 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.305 15:36:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:54.305 15:36:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:54.305 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.305 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.305 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.305 15:36:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.305 15:36:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:54.305 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.305 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.305 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:13:54.564 15:36:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.564 15:36:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:54.564 15:36:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:13:54.564 15:36:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # digest=sha512 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # keyid=0 00:13:54.564 15:36:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:54.564 15:36:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:54.564 15:36:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:54.564 
15:36:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:54.564 15:36:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:13:54.564 15:36:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # digest=sha512 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # keyid=0 00:13:54.564 15:36:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:54.564 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.564 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:54.564 15:36:55 -- nvmf/common.sh@717 -- # local ip 00:13:54.564 15:36:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:54.564 15:36:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:54.564 15:36:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:54.564 15:36:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:54.564 15:36:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:54.564 15:36:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:54.564 15:36:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:54.564 15:36:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:54.564 15:36:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:54.564 15:36:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:54.564 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.564 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 nvme0n1 00:13:54.564 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:54.564 15:36:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:54.564 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.564 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:54.564 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.564 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.564 15:36:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:54.564 15:36:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:13:54.564 15:36:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # digest=sha512 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:54.564 15:36:55 -- host/auth.sh@44 -- # keyid=1 00:13:54.564 15:36:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:54.564 15:36:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:54.564 15:36:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:54.564 15:36:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:54.564 15:36:55 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:13:54.564 15:36:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # digest=sha512 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:54.564 15:36:55 -- host/auth.sh@68 -- # keyid=1 00:13:54.564 15:36:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:54.564 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.564 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.565 15:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.565 15:36:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:54.565 15:36:55 -- nvmf/common.sh@717 -- # local ip 00:13:54.565 15:36:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:54.565 15:36:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:54.565 15:36:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:54.565 15:36:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:54.565 15:36:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:54.565 15:36:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:54.565 15:36:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:54.565 15:36:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:54.565 15:36:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:54.565 15:36:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:54.565 15:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.565 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 nvme0n1 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:54.823 15:36:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:13:54.823 15:36:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:54.823 15:36:56 -- host/auth.sh@44 -- # digest=sha512 00:13:54.823 15:36:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:54.823 15:36:56 -- host/auth.sh@44 -- # keyid=2 00:13:54.823 15:36:56 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:54.823 15:36:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:54.823 15:36:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:54.823 15:36:56 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:54.823 15:36:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:13:54.823 15:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:54.823 15:36:56 -- 
host/auth.sh@68 -- # digest=sha512 00:13:54.823 15:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:54.823 15:36:56 -- host/auth.sh@68 -- # keyid=2 00:13:54.823 15:36:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:54.823 15:36:56 -- nvmf/common.sh@717 -- # local ip 00:13:54.823 15:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:54.823 15:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:54.823 15:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:54.823 15:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:54.823 15:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:54.823 15:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:54.823 15:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:54.823 15:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:54.823 15:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:54.823 15:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 nvme0n1 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.823 15:36:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:54.823 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.823 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.086 15:36:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:13:55.086 15:36:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # digest=sha512 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # keyid=3 00:13:55.086 15:36:56 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:55.086 15:36:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.086 15:36:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:55.086 15:36:56 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:55.086 15:36:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:13:55.086 15:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.086 15:36:56 -- host/auth.sh@68 -- # digest=sha512 00:13:55.086 15:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:55.086 15:36:56 
-- host/auth.sh@68 -- # keyid=3 00:13:55.086 15:36:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:55.086 15:36:56 -- nvmf/common.sh@717 -- # local ip 00:13:55.086 15:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:55.086 15:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.086 15:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.086 15:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 nvme0n1 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.086 15:36:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:13:55.086 15:36:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # digest=sha512 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:13:55.086 15:36:56 -- host/auth.sh@44 -- # keyid=4 00:13:55.086 15:36:56 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:55.086 15:36:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.086 15:36:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:13:55.086 15:36:56 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:55.086 15:36:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:13:55.086 15:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.086 15:36:56 -- host/auth.sh@68 -- # digest=sha512 00:13:55.086 15:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:13:55.086 15:36:56 -- host/auth.sh@68 -- # keyid=4 00:13:55.086 15:36:56 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.086 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.086 15:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:55.086 15:36:56 -- nvmf/common.sh@717 -- # local ip 00:13:55.086 15:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:55.086 15:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.086 15:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.086 15:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.086 15:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.086 15:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:55.086 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.086 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 nvme0n1 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:55.345 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.345 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:55.345 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:55.345 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.345 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.345 15:36:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.345 15:36:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:13:55.345 15:36:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.345 15:36:56 -- host/auth.sh@44 -- # digest=sha512 00:13:55.345 15:36:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:55.345 15:36:56 -- host/auth.sh@44 -- # keyid=0 00:13:55.345 15:36:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:55.345 15:36:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.345 15:36:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:55.345 15:36:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:55.345 15:36:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:13:55.345 15:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.345 15:36:56 -- host/auth.sh@68 -- # digest=sha512 00:13:55.345 15:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:55.345 15:36:56 -- host/auth.sh@68 -- # keyid=0 00:13:55.345 15:36:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:13:55.345 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.345 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:55.345 15:36:56 -- nvmf/common.sh@717 -- # local ip 00:13:55.345 15:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:55.345 15:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.345 15:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.345 15:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.345 15:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.345 15:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.345 15:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.345 15:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.345 15:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.345 15:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:55.345 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.345 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 nvme0n1 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.345 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:55.345 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:55.345 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.345 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.603 15:36:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.603 15:36:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:55.603 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.603 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.603 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.603 15:36:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.603 15:36:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:13:55.603 15:36:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.603 15:36:56 -- host/auth.sh@44 -- # digest=sha512 00:13:55.603 15:36:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:55.603 15:36:56 -- host/auth.sh@44 -- # keyid=1 00:13:55.603 15:36:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:55.603 15:36:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.603 15:36:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:55.603 15:36:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:55.603 15:36:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:13:55.603 15:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.603 15:36:56 -- host/auth.sh@68 -- # digest=sha512 00:13:55.603 15:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:55.603 15:36:56 -- host/auth.sh@68 -- # keyid=1 00:13:55.604 15:36:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:55.604 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.604 15:36:56 -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.604 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.604 15:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:55.604 15:36:56 -- nvmf/common.sh@717 -- # local ip 00:13:55.604 15:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:55.604 15:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.604 15:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.604 15:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.604 15:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.604 15:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.604 15:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.604 15:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.604 15:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.604 15:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:55.604 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.604 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.604 nvme0n1 00:13:55.604 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.604 15:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:55.604 15:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.604 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.604 15:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:55.604 15:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.604 15:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.604 15:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:55.604 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.604 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.604 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.604 15:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.604 15:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:13:55.604 15:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.604 15:36:57 -- host/auth.sh@44 -- # digest=sha512 00:13:55.604 15:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:55.604 15:36:57 -- host/auth.sh@44 -- # keyid=2 00:13:55.604 15:36:57 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:55.604 15:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.604 15:36:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:55.604 15:36:57 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:55.604 15:36:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:13:55.604 15:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.604 15:36:57 -- host/auth.sh@68 -- # digest=sha512 00:13:55.604 15:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:55.604 15:36:57 -- host/auth.sh@68 -- # keyid=2 00:13:55.604 15:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:55.604 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.604 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.604 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.862 15:36:57 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:13:55.862 15:36:57 -- nvmf/common.sh@717 -- # local ip 00:13:55.862 15:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:55.862 15:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.862 15:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.862 15:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.862 15:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.862 15:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.862 15:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.862 15:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.862 15:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.862 15:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:55.862 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.862 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 nvme0n1 00:13:55.862 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.862 15:36:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:55.862 15:36:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:55.862 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.862 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.862 15:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.862 15:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:55.862 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.862 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.862 15:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:55.862 15:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:13:55.862 15:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:55.862 15:36:57 -- host/auth.sh@44 -- # digest=sha512 00:13:55.862 15:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:55.862 15:36:57 -- host/auth.sh@44 -- # keyid=3 00:13:55.862 15:36:57 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:55.862 15:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:55.862 15:36:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:55.863 15:36:57 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:55.863 15:36:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:13:55.863 15:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:55.863 15:36:57 -- host/auth.sh@68 -- # digest=sha512 00:13:55.863 15:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:55.863 15:36:57 -- host/auth.sh@68 -- # keyid=3 00:13:55.863 15:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:55.863 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.863 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.863 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.863 15:36:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:55.863 15:36:57 -- nvmf/common.sh@717 -- # local ip 00:13:55.863 15:36:57 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:13:55.863 15:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:55.863 15:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:55.863 15:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:55.863 15:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:55.863 15:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:55.863 15:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:55.863 15:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:55.863 15:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:55.863 15:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:55.863 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.863 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.121 nvme0n1 00:13:56.121 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.121 15:36:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:56.121 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.121 15:36:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:56.121 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.121 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.121 15:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.121 15:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:56.121 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.121 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.121 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.121 15:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:56.121 15:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:13:56.121 15:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:56.121 15:36:57 -- host/auth.sh@44 -- # digest=sha512 00:13:56.121 15:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:13:56.122 15:36:57 -- host/auth.sh@44 -- # keyid=4 00:13:56.122 15:36:57 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:56.122 15:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:56.122 15:36:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:13:56.122 15:36:57 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:56.122 15:36:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:13:56.122 15:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:56.122 15:36:57 -- host/auth.sh@68 -- # digest=sha512 00:13:56.122 15:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:13:56.122 15:36:57 -- host/auth.sh@68 -- # keyid=4 00:13:56.122 15:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:56.122 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.122 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.122 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.122 15:36:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:56.122 15:36:57 -- nvmf/common.sh@717 -- # local ip 00:13:56.122 15:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:56.122 15:36:57 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:13:56.122 15:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:56.122 15:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:56.122 15:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:56.122 15:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:56.122 15:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:56.122 15:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:56.122 15:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:56.122 15:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:56.122 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.122 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.406 nvme0n1 00:13:56.406 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.406 15:36:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:56.406 15:36:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:56.406 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.406 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.406 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.406 15:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.406 15:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:56.406 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.406 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.406 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.406 15:36:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.406 15:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:56.406 15:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:13:56.406 15:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:56.406 15:36:57 -- host/auth.sh@44 -- # digest=sha512 00:13:56.406 15:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:56.406 15:36:57 -- host/auth.sh@44 -- # keyid=0 00:13:56.406 15:36:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:56.406 15:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:56.406 15:36:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:56.406 15:36:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:56.406 15:36:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:13:56.406 15:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:56.406 15:36:57 -- host/auth.sh@68 -- # digest=sha512 00:13:56.406 15:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:56.406 15:36:57 -- host/auth.sh@68 -- # keyid=0 00:13:56.406 15:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.406 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.406 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.406 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.406 15:36:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:56.406 15:36:57 -- nvmf/common.sh@717 -- # local ip 00:13:56.406 15:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:56.406 15:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:56.406 15:36:57 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:56.406 15:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:56.406 15:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:56.406 15:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:56.406 15:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:56.406 15:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:56.406 15:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:56.406 15:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:56.406 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.406 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.406 nvme0n1 00:13:56.406 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.664 15:36:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:56.664 15:36:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:56.664 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.664 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.664 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.664 15:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.664 15:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:56.664 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.664 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.664 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.664 15:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:56.664 15:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:13:56.664 15:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:56.664 15:36:57 -- host/auth.sh@44 -- # digest=sha512 00:13:56.664 15:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:56.664 15:36:57 -- host/auth.sh@44 -- # keyid=1 00:13:56.664 15:36:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:56.664 15:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:56.664 15:36:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:56.664 15:36:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:56.664 15:36:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:13:56.664 15:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:56.664 15:36:57 -- host/auth.sh@68 -- # digest=sha512 00:13:56.664 15:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:56.664 15:36:57 -- host/auth.sh@68 -- # keyid=1 00:13:56.664 15:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.664 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.664 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.664 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.664 15:36:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:56.664 15:36:57 -- nvmf/common.sh@717 -- # local ip 00:13:56.664 15:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:56.664 15:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:56.664 15:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:56.664 15:36:57 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:56.664 15:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:56.664 15:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:56.664 15:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:56.664 15:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:56.664 15:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:56.664 15:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:56.664 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.664 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 nvme0n1 00:13:56.923 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.923 15:36:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:56.923 15:36:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:56.923 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.923 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.923 15:36:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.923 15:36:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:56.923 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.923 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.923 15:36:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:56.923 15:36:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:13:56.923 15:36:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:56.923 15:36:58 -- host/auth.sh@44 -- # digest=sha512 00:13:56.923 15:36:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:56.923 15:36:58 -- host/auth.sh@44 -- # keyid=2 00:13:56.923 15:36:58 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:56.923 15:36:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:56.923 15:36:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:56.923 15:36:58 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:56.923 15:36:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:13:56.923 15:36:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:56.923 15:36:58 -- host/auth.sh@68 -- # digest=sha512 00:13:56.923 15:36:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:56.923 15:36:58 -- host/auth.sh@68 -- # keyid=2 00:13:56.923 15:36:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.923 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.923 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.923 15:36:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:56.923 15:36:58 -- nvmf/common.sh@717 -- # local ip 00:13:56.923 15:36:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:56.923 15:36:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:56.923 15:36:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:56.923 15:36:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:56.923 15:36:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:56.923 15:36:58 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:13:56.923 15:36:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:56.923 15:36:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:56.923 15:36:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:56.923 15:36:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:56.923 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.923 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.183 nvme0n1 00:13:57.183 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.183 15:36:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:57.183 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.183 15:36:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:57.183 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.183 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.183 15:36:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.183 15:36:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:57.183 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.183 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.183 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.183 15:36:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:57.183 15:36:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:13:57.183 15:36:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:57.183 15:36:58 -- host/auth.sh@44 -- # digest=sha512 00:13:57.183 15:36:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:57.183 15:36:58 -- host/auth.sh@44 -- # keyid=3 00:13:57.183 15:36:58 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:57.183 15:36:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:57.183 15:36:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:57.183 15:36:58 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:57.183 15:36:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:13:57.183 15:36:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:57.183 15:36:58 -- host/auth.sh@68 -- # digest=sha512 00:13:57.183 15:36:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:57.183 15:36:58 -- host/auth.sh@68 -- # keyid=3 00:13:57.183 15:36:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.183 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.183 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.183 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.183 15:36:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:57.183 15:36:58 -- nvmf/common.sh@717 -- # local ip 00:13:57.183 15:36:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:57.183 15:36:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:57.183 15:36:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:57.183 15:36:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:57.183 15:36:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:57.183 15:36:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:57.183 15:36:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:57.183 15:36:58 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:57.183 15:36:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:57.183 15:36:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:57.183 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.183 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.442 nvme0n1 00:13:57.442 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.442 15:36:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:57.442 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.442 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.442 15:36:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:57.442 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.442 15:36:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.442 15:36:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:57.442 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.442 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.442 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.442 15:36:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:57.442 15:36:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:13:57.442 15:36:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:57.442 15:36:58 -- host/auth.sh@44 -- # digest=sha512 00:13:57.442 15:36:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:13:57.442 15:36:58 -- host/auth.sh@44 -- # keyid=4 00:13:57.442 15:36:58 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:57.442 15:36:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:57.442 15:36:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:13:57.442 15:36:58 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:57.442 15:36:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:13:57.442 15:36:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:57.442 15:36:58 -- host/auth.sh@68 -- # digest=sha512 00:13:57.442 15:36:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:13:57.442 15:36:58 -- host/auth.sh@68 -- # keyid=4 00:13:57.442 15:36:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.442 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.442 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.442 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.442 15:36:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:57.442 15:36:58 -- nvmf/common.sh@717 -- # local ip 00:13:57.442 15:36:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:57.442 15:36:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:57.442 15:36:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:57.442 15:36:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:57.442 15:36:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:57.442 15:36:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:57.442 15:36:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:57.442 15:36:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:57.442 15:36:58 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:57.442 15:36:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:57.442 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.442 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.701 nvme0n1 00:13:57.701 15:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.701 15:36:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:57.701 15:36:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:57.701 15:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.701 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:13:57.701 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.701 15:36:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.701 15:36:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:57.701 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.701 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:57.701 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.701 15:36:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.701 15:36:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:57.701 15:36:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:13:57.701 15:36:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:57.701 15:36:59 -- host/auth.sh@44 -- # digest=sha512 00:13:57.701 15:36:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:57.701 15:36:59 -- host/auth.sh@44 -- # keyid=0 00:13:57.701 15:36:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:57.701 15:36:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:57.701 15:36:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:57.701 15:36:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:57.701 15:36:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:13:57.701 15:36:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:57.701 15:36:59 -- host/auth.sh@68 -- # digest=sha512 00:13:57.701 15:36:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:57.701 15:36:59 -- host/auth.sh@68 -- # keyid=0 00:13:57.701 15:36:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:57.701 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.701 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:57.701 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.701 15:36:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:57.701 15:36:59 -- nvmf/common.sh@717 -- # local ip 00:13:57.701 15:36:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:57.701 15:36:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:57.701 15:36:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:57.701 15:36:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:57.701 15:36:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:57.701 15:36:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:57.701 15:36:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:57.701 15:36:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:57.701 15:36:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:57.701 15:36:59 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:57.701 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.701 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 nvme0n1 00:13:58.267 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.267 15:36:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:58.267 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.267 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 15:36:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:58.267 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.267 15:36:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.267 15:36:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:58.267 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.267 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.267 15:36:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:58.267 15:36:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:13:58.267 15:36:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:58.267 15:36:59 -- host/auth.sh@44 -- # digest=sha512 00:13:58.267 15:36:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:58.267 15:36:59 -- host/auth.sh@44 -- # keyid=1 00:13:58.267 15:36:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:58.267 15:36:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:58.267 15:36:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:58.267 15:36:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:13:58.267 15:36:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:13:58.267 15:36:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:58.267 15:36:59 -- host/auth.sh@68 -- # digest=sha512 00:13:58.267 15:36:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:58.267 15:36:59 -- host/auth.sh@68 -- # keyid=1 00:13:58.267 15:36:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.267 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.267 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.267 15:36:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:58.267 15:36:59 -- nvmf/common.sh@717 -- # local ip 00:13:58.267 15:36:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:58.267 15:36:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:58.267 15:36:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:58.267 15:36:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:58.267 15:36:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:58.267 15:36:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:58.267 15:36:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:58.267 15:36:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:58.267 15:36:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:58.267 15:36:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:13:58.267 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.267 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 nvme0n1 00:13:58.525 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.525 15:36:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:58.525 15:36:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:58.525 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.525 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.525 15:36:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.525 15:36:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:58.525 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.525 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.525 15:36:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:58.525 15:36:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:13:58.525 15:36:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:58.525 15:36:59 -- host/auth.sh@44 -- # digest=sha512 00:13:58.525 15:36:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:58.525 15:36:59 -- host/auth.sh@44 -- # keyid=2 00:13:58.525 15:36:59 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:58.525 15:36:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:58.525 15:36:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:58.525 15:36:59 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:13:58.525 15:36:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:13:58.525 15:36:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:58.525 15:36:59 -- host/auth.sh@68 -- # digest=sha512 00:13:58.525 15:36:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:58.525 15:36:59 -- host/auth.sh@68 -- # keyid=2 00:13:58.525 15:36:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.525 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.525 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 15:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:58.525 15:36:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:58.525 15:36:59 -- nvmf/common.sh@717 -- # local ip 00:13:58.525 15:36:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:58.525 15:36:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:58.525 15:36:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:58.525 15:36:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:58.525 15:36:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:58.525 15:36:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:58.525 15:36:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:58.525 15:36:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:58.525 15:36:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:58.525 15:36:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:13:58.525 15:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:58.525 15:36:59 -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.091 nvme0n1 00:13:59.091 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.091 15:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:59.091 15:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:59.091 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.091 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.091 15:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.091 15:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:59.091 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.091 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.091 15:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:59.091 15:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:13:59.091 15:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:59.091 15:37:00 -- host/auth.sh@44 -- # digest=sha512 00:13:59.091 15:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:59.091 15:37:00 -- host/auth.sh@44 -- # keyid=3 00:13:59.091 15:37:00 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:59.091 15:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:59.091 15:37:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:59.091 15:37:00 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:13:59.091 15:37:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:13:59.091 15:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:59.091 15:37:00 -- host/auth.sh@68 -- # digest=sha512 00:13:59.091 15:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:59.091 15:37:00 -- host/auth.sh@68 -- # keyid=3 00:13:59.091 15:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:59.091 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.091 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.091 15:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:59.091 15:37:00 -- nvmf/common.sh@717 -- # local ip 00:13:59.091 15:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:59.091 15:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:59.091 15:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:59.091 15:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:59.091 15:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:59.091 15:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:59.091 15:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:59.091 15:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:59.091 15:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:59.091 15:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:13:59.091 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.091 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.349 nvme0n1 00:13:59.349 15:37:00 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:13:59.349 15:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:59.349 15:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:13:59.349 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.349 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.349 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.607 15:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.607 15:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:59.607 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.607 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.607 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.607 15:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:59.607 15:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:13:59.607 15:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:59.607 15:37:00 -- host/auth.sh@44 -- # digest=sha512 00:13:59.607 15:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:13:59.607 15:37:00 -- host/auth.sh@44 -- # keyid=4 00:13:59.607 15:37:00 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:59.607 15:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:59.607 15:37:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:13:59.607 15:37:00 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:13:59.607 15:37:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:13:59.607 15:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:59.607 15:37:00 -- host/auth.sh@68 -- # digest=sha512 00:13:59.607 15:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:13:59.607 15:37:00 -- host/auth.sh@68 -- # keyid=4 00:13:59.607 15:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:59.607 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.607 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.607 15:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.607 15:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:59.607 15:37:00 -- nvmf/common.sh@717 -- # local ip 00:13:59.607 15:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:59.607 15:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:59.607 15:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:59.607 15:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:59.607 15:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:59.607 15:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:59.607 15:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:59.607 15:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:59.607 15:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:59.607 15:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:13:59.607 15:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.607 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.864 nvme0n1 00:13:59.865 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.865 15:37:01 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:13:59.865 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.865 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:59.865 15:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:13:59.865 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.865 15:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.865 15:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:59.865 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.865 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:59.865 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.865 15:37:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:13:59.865 15:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:13:59.865 15:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:13:59.865 15:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:13:59.865 15:37:01 -- host/auth.sh@44 -- # digest=sha512 00:13:59.865 15:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:13:59.865 15:37:01 -- host/auth.sh@44 -- # keyid=0 00:13:59.865 15:37:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:59.865 15:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:13:59.865 15:37:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:13:59.865 15:37:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRiZGY4NzU4Y2M3OTkyYWExZTc4NDVlMzZlMDBiMTZcy3CZ: 00:13:59.865 15:37:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:13:59.865 15:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:13:59.865 15:37:01 -- host/auth.sh@68 -- # digest=sha512 00:13:59.865 15:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:13:59.865 15:37:01 -- host/auth.sh@68 -- # keyid=0 00:13:59.865 15:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:59.865 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.865 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:59.865 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.865 15:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:13:59.865 15:37:01 -- nvmf/common.sh@717 -- # local ip 00:13:59.865 15:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:13:59.865 15:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:13:59.865 15:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:59.865 15:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:59.865 15:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:13:59.865 15:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:59.865 15:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:13:59.865 15:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:13:59.865 15:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:13:59.865 15:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:13:59.865 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.865 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:00.797 nvme0n1 00:14:00.797 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.797 15:37:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:14:00.797 15:37:01 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:14:00.797 15:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:14:00.797 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:00.797 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.797 15:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.797 15:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:00.797 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.797 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:00.797 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.797 15:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:14:00.797 15:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:14:00.797 15:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:14:00.797 15:37:01 -- host/auth.sh@44 -- # digest=sha512 00:14:00.797 15:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:14:00.797 15:37:01 -- host/auth.sh@44 -- # keyid=1 00:14:00.797 15:37:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:14:00.797 15:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:14:00.797 15:37:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:14:00.797 15:37:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:14:00.797 15:37:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:14:00.797 15:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:14:00.797 15:37:01 -- host/auth.sh@68 -- # digest=sha512 00:14:00.797 15:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:14:00.797 15:37:01 -- host/auth.sh@68 -- # keyid=1 00:14:00.797 15:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:00.797 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.797 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:00.797 15:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.797 15:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:14:00.797 15:37:01 -- nvmf/common.sh@717 -- # local ip 00:14:00.797 15:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:00.797 15:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:00.798 15:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:00.798 15:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:00.798 15:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:00.798 15:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:00.798 15:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:00.798 15:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:00.798 15:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:00.798 15:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:14:00.798 15:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.798 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:01.363 nvme0n1 00:14:01.363 15:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.363 15:37:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:14:01.364 15:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.364 15:37:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:14:01.364 15:37:02 -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.364 15:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.364 15:37:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.364 15:37:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:01.364 15:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.364 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.364 15:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.364 15:37:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:14:01.364 15:37:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:14:01.364 15:37:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:14:01.364 15:37:02 -- host/auth.sh@44 -- # digest=sha512 00:14:01.364 15:37:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:14:01.364 15:37:02 -- host/auth.sh@44 -- # keyid=2 00:14:01.364 15:37:02 -- host/auth.sh@45 -- # key=DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:14:01.364 15:37:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:14:01.364 15:37:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:14:01.364 15:37:02 -- host/auth.sh@49 -- # echo DHHC-1:01:YTU4NTIwZjAxZmZiYWVmNWNkMzUxYjM5YWYyZDljZWbmb8i5: 00:14:01.364 15:37:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:14:01.364 15:37:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:14:01.364 15:37:02 -- host/auth.sh@68 -- # digest=sha512 00:14:01.364 15:37:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:14:01.364 15:37:02 -- host/auth.sh@68 -- # keyid=2 00:14:01.364 15:37:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:01.364 15:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.364 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.364 15:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.364 15:37:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:14:01.364 15:37:02 -- nvmf/common.sh@717 -- # local ip 00:14:01.364 15:37:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:01.364 15:37:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:01.364 15:37:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:01.364 15:37:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:01.364 15:37:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:01.364 15:37:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:01.364 15:37:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:01.364 15:37:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:01.364 15:37:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:01.364 15:37:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:14:01.364 15:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.364 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.931 nvme0n1 00:14:01.931 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.931 15:37:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:14:01.931 15:37:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:14:01.931 15:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.931 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.931 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.931 15:37:03 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:14:01.931 15:37:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:01.931 15:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.931 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.931 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.931 15:37:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:14:01.931 15:37:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:14:01.931 15:37:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:14:01.931 15:37:03 -- host/auth.sh@44 -- # digest=sha512 00:14:01.931 15:37:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:14:01.931 15:37:03 -- host/auth.sh@44 -- # keyid=3 00:14:01.931 15:37:03 -- host/auth.sh@45 -- # key=DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:14:01.931 15:37:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:14:01.931 15:37:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:14:01.931 15:37:03 -- host/auth.sh@49 -- # echo DHHC-1:02:NDhhZTA0YmFkY2RlMTQ5NjQ3ZmQyZGNjNDUwNjFhMDExZmNjNDlhMGM3MDhkNDBj2LibMg==: 00:14:01.931 15:37:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:14:01.931 15:37:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:14:01.931 15:37:03 -- host/auth.sh@68 -- # digest=sha512 00:14:01.931 15:37:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:14:01.931 15:37:03 -- host/auth.sh@68 -- # keyid=3 00:14:01.931 15:37:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:01.931 15:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.931 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.931 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.931 15:37:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:14:01.931 15:37:03 -- nvmf/common.sh@717 -- # local ip 00:14:01.931 15:37:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:01.931 15:37:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:01.931 15:37:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:02.190 15:37:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:02.190 15:37:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:02.190 15:37:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:02.190 15:37:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:02.190 15:37:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:02.190 15:37:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:02.190 15:37:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:14:02.190 15:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:02.190 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.757 nvme0n1 00:14:02.757 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:02.757 15:37:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:14:02.757 15:37:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:14:02.757 15:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:02.757 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.757 15:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:02.757 15:37:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.757 15:37:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:02.757 
15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:02.757 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:02.757 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:02.757 15:37:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:14:02.757 15:37:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:14:02.757 15:37:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:14:02.757 15:37:04 -- host/auth.sh@44 -- # digest=sha512 00:14:02.757 15:37:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:14:02.757 15:37:04 -- host/auth.sh@44 -- # keyid=4 00:14:02.757 15:37:04 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:14:02.757 15:37:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:14:02.757 15:37:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:14:02.757 15:37:04 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ViNjM0YzZmYzc2ZjdjMDhhZjg5NDA1N2NkOWYzZWFlMDhiNzMyNTA2OWVjZDllZTk2MGFlNjJkOTkyMjQ5MX6hZE4=: 00:14:02.757 15:37:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:14:02.757 15:37:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:14:02.757 15:37:04 -- host/auth.sh@68 -- # digest=sha512 00:14:02.757 15:37:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:14:02.757 15:37:04 -- host/auth.sh@68 -- # keyid=4 00:14:02.757 15:37:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.757 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:02.757 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:02.757 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:02.757 15:37:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:14:02.757 15:37:04 -- nvmf/common.sh@717 -- # local ip 00:14:02.757 15:37:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:02.757 15:37:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:02.757 15:37:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:02.757 15:37:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:02.757 15:37:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:02.757 15:37:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:02.757 15:37:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:02.757 15:37:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:02.757 15:37:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:02.757 15:37:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:14:02.757 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:02.757 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.378 nvme0n1 00:14:03.378 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.378 15:37:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:14:03.378 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.378 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.378 15:37:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:14:03.378 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.378 15:37:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.378 15:37:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:03.378 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.378 
15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.378 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.378 15:37:04 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:14:03.378 15:37:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:14:03.378 15:37:04 -- host/auth.sh@44 -- # digest=sha256 00:14:03.378 15:37:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:14:03.378 15:37:04 -- host/auth.sh@44 -- # keyid=1 00:14:03.378 15:37:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:14:03.378 15:37:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:14:03.378 15:37:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:14:03.378 15:37:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MjliZjQ0ZTBmN2QxYTg3N2VkMzcwYzk0NTJiZjcwNzhkMzYzNzExYTkxOGI5ZDc1TVXigg==: 00:14:03.378 15:37:04 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.378 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.378 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.378 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.378 15:37:04 -- host/auth.sh@119 -- # get_main_ns_ip 00:14:03.378 15:37:04 -- nvmf/common.sh@717 -- # local ip 00:14:03.378 15:37:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:03.378 15:37:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:03.378 15:37:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:03.378 15:37:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:03.378 15:37:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:03.378 15:37:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:03.378 15:37:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:03.378 15:37:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:03.378 15:37:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:03.378 15:37:04 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:14:03.378 15:37:04 -- common/autotest_common.sh@638 -- # local es=0 00:14:03.378 15:37:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:14:03.378 15:37:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:03.378 15:37:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:03.378 15:37:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:03.378 15:37:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:03.378 15:37:04 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:14:03.378 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.378 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.378 request: 00:14:03.378 { 00:14:03.378 "name": "nvme0", 00:14:03.379 "trtype": "tcp", 00:14:03.379 "traddr": "10.0.0.1", 00:14:03.379 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:14:03.379 "adrfam": "ipv4", 00:14:03.379 "trsvcid": "4420", 00:14:03.379 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:14:03.379 "method": "bdev_nvme_attach_controller", 00:14:03.379 "req_id": 1 00:14:03.379 } 00:14:03.379 Got JSON-RPC error 
response 00:14:03.379 response: 00:14:03.379 { 00:14:03.379 "code": -32602, 00:14:03.379 "message": "Invalid parameters" 00:14:03.379 } 00:14:03.379 15:37:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:03.379 15:37:04 -- common/autotest_common.sh@641 -- # es=1 00:14:03.379 15:37:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:03.379 15:37:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:03.379 15:37:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:03.379 15:37:04 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:14:03.379 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.379 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.379 15:37:04 -- host/auth.sh@121 -- # jq length 00:14:03.379 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.637 15:37:04 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:14:03.637 15:37:04 -- host/auth.sh@124 -- # get_main_ns_ip 00:14:03.637 15:37:04 -- nvmf/common.sh@717 -- # local ip 00:14:03.637 15:37:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:14:03.637 15:37:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:14:03.637 15:37:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:03.637 15:37:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:03.637 15:37:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:14:03.637 15:37:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:03.637 15:37:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:14:03.637 15:37:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:14:03.637 15:37:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:14:03.637 15:37:04 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:14:03.637 15:37:04 -- common/autotest_common.sh@638 -- # local es=0 00:14:03.637 15:37:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:14:03.637 15:37:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:03.637 15:37:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:03.637 15:37:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:03.637 15:37:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:03.637 15:37:04 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:14:03.637 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.637 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.637 request: 00:14:03.637 { 00:14:03.637 "name": "nvme0", 00:14:03.637 "trtype": "tcp", 00:14:03.637 "traddr": "10.0.0.1", 00:14:03.637 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:14:03.637 "adrfam": "ipv4", 00:14:03.637 "trsvcid": "4420", 00:14:03.637 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:14:03.637 "dhchap_key": "key2", 00:14:03.637 "method": "bdev_nvme_attach_controller", 00:14:03.637 "req_id": 1 00:14:03.637 } 00:14:03.637 Got JSON-RPC error response 00:14:03.637 response: 00:14:03.637 { 00:14:03.637 "code": -32602, 00:14:03.637 "message": "Invalid parameters" 00:14:03.637 } 00:14:03.637 15:37:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
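For reference, a minimal sketch of the per-iteration RPC sequence that the sha512 / ffdhe3072..ffdhe8192 loops above exercise, and that the two rejected attach attempts here deliberately break by omitting or mismatching the DH-HMAC-CHAP key. The address, NQNs, key names and the rpc_cmd wrapper are taken from the trace; running them as standalone shell commands outside the auth.sh harness is an assumption.

# positive path: restrict digests/dhgroups, attach with the matching key, verify, detach
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" when authentication succeeds
rpc_cmd bdev_nvme_detach_controller nvme0
# negative path (the two requests above): attach without --dhchap-key, or with a key that does
# not match the target's secret; the call fails with JSON-RPC -32602 "Invalid parameters",
# which the NOT wrapper converts into a passing check.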
00:14:03.637 15:37:04 -- common/autotest_common.sh@641 -- # es=1 00:14:03.637 15:37:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:03.637 15:37:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:03.637 15:37:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:03.637 15:37:04 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:14:03.637 15:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.637 15:37:04 -- host/auth.sh@127 -- # jq length 00:14:03.637 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:03.637 15:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.637 15:37:04 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:14:03.637 15:37:04 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:14:03.637 15:37:04 -- host/auth.sh@130 -- # cleanup 00:14:03.637 15:37:04 -- host/auth.sh@24 -- # nvmftestfini 00:14:03.637 15:37:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:03.637 15:37:04 -- nvmf/common.sh@117 -- # sync 00:14:03.637 15:37:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.637 15:37:04 -- nvmf/common.sh@120 -- # set +e 00:14:03.637 15:37:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.637 15:37:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.637 rmmod nvme_tcp 00:14:03.637 rmmod nvme_fabrics 00:14:03.637 15:37:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.637 15:37:05 -- nvmf/common.sh@124 -- # set -e 00:14:03.637 15:37:05 -- nvmf/common.sh@125 -- # return 0 00:14:03.637 15:37:05 -- nvmf/common.sh@478 -- # '[' -n 74644 ']' 00:14:03.637 15:37:05 -- nvmf/common.sh@479 -- # killprocess 74644 00:14:03.637 15:37:05 -- common/autotest_common.sh@936 -- # '[' -z 74644 ']' 00:14:03.637 15:37:05 -- common/autotest_common.sh@940 -- # kill -0 74644 00:14:03.637 15:37:05 -- common/autotest_common.sh@941 -- # uname 00:14:03.637 15:37:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.637 15:37:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74644 00:14:03.637 killing process with pid 74644 00:14:03.637 15:37:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:03.637 15:37:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:03.637 15:37:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74644' 00:14:03.637 15:37:05 -- common/autotest_common.sh@955 -- # kill 74644 00:14:03.637 15:37:05 -- common/autotest_common.sh@960 -- # wait 74644 00:14:04.203 15:37:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:04.203 15:37:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:04.203 15:37:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:04.203 15:37:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.203 15:37:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.203 15:37:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.203 15:37:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.203 15:37:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.203 15:37:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:04.203 15:37:05 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:14:04.203 15:37:05 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:14:04.203 15:37:05 -- host/auth.sh@27 -- # clean_kernel_target 00:14:04.203 15:37:05 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:14:04.204 15:37:05 -- nvmf/common.sh@675 -- # echo 0 00:14:04.204 15:37:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:14:04.204 15:37:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:14:04.204 15:37:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:14:04.204 15:37:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:14:04.204 15:37:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:14:04.204 15:37:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:14:04.204 15:37:05 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:04.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.027 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.027 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.027 15:37:06 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TaT /tmp/spdk.key-null.wlJ /tmp/spdk.key-sha256.Wi4 /tmp/spdk.key-sha384.suH /tmp/spdk.key-sha512.CVX /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:14:05.027 15:37:06 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:05.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.284 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:05.284 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:05.284 ************************************ 00:14:05.284 END TEST nvmf_auth 00:14:05.284 ************************************ 00:14:05.284 00:14:05.284 real 0m38.806s 00:14:05.284 user 0m34.853s 00:14:05.284 sys 0m3.579s 00:14:05.284 15:37:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.284 15:37:06 -- common/autotest_common.sh@10 -- # set +x 00:14:05.543 15:37:06 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:14:05.543 15:37:06 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:14:05.543 15:37:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:05.543 15:37:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.543 15:37:06 -- common/autotest_common.sh@10 -- # set +x 00:14:05.543 ************************************ 00:14:05.543 START TEST nvmf_digest 00:14:05.543 ************************************ 00:14:05.543 15:37:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:14:05.543 * Looking for test storage... 
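The teardown traced above removes the kernel nvmet target that the auth test had built through configfs: links are dropped before directories, namespaces and ports before the subsystem, and only then are the nvmet modules unloaded. A condensed sketch of that order, using the same subsystem NQN and port as this run (the target of the bare "echo 0" is not visible in the trace; the namespace enable attribute below is an assumption):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  echo 0 > "$subsys/namespaces/1/enable"        # assumed target of the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet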
00:14:05.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:05.543 15:37:06 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.543 15:37:06 -- nvmf/common.sh@7 -- # uname -s 00:14:05.543 15:37:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.543 15:37:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.543 15:37:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.543 15:37:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.543 15:37:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.543 15:37:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.543 15:37:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.543 15:37:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.543 15:37:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.543 15:37:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:14:05.543 15:37:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:14:05.543 15:37:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.543 15:37:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.543 15:37:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.543 15:37:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.543 15:37:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.543 15:37:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.543 15:37:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.543 15:37:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.543 15:37:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.543 15:37:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.543 15:37:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.543 15:37:06 -- paths/export.sh@5 -- # export PATH 00:14:05.543 15:37:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.543 15:37:06 -- nvmf/common.sh@47 -- # : 0 00:14:05.543 15:37:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.543 15:37:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.543 15:37:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.543 15:37:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.543 15:37:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.543 15:37:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.543 15:37:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.543 15:37:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.543 15:37:06 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:05.543 15:37:06 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:14:05.543 15:37:06 -- host/digest.sh@16 -- # runtime=2 00:14:05.543 15:37:06 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:14:05.543 15:37:06 -- host/digest.sh@138 -- # nvmftestinit 00:14:05.543 15:37:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:05.543 15:37:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.543 15:37:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:05.543 15:37:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:05.543 15:37:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:05.543 15:37:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.543 15:37:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.543 15:37:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.543 15:37:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:05.543 15:37:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:05.543 15:37:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.543 15:37:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.543 15:37:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.543 15:37:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.543 15:37:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:14:05.543 15:37:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.543 15:37:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.543 15:37:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.543 15:37:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.543 15:37:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.543 15:37:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.543 15:37:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.543 15:37:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.543 15:37:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.543 Cannot find device "nvmf_tgt_br" 00:14:05.543 15:37:06 -- nvmf/common.sh@155 -- # true 00:14:05.543 15:37:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.543 Cannot find device "nvmf_tgt_br2" 00:14:05.543 15:37:06 -- nvmf/common.sh@156 -- # true 00:14:05.543 15:37:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.802 15:37:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.802 Cannot find device "nvmf_tgt_br" 00:14:05.802 15:37:06 -- nvmf/common.sh@158 -- # true 00:14:05.802 15:37:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:05.802 Cannot find device "nvmf_tgt_br2" 00:14:05.802 15:37:07 -- nvmf/common.sh@159 -- # true 00:14:05.802 15:37:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.802 15:37:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.802 15:37:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.802 15:37:07 -- nvmf/common.sh@162 -- # true 00:14:05.802 15:37:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.802 15:37:07 -- nvmf/common.sh@163 -- # true 00:14:05.802 15:37:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.802 15:37:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.802 15:37:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.802 15:37:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.802 15:37:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.802 15:37:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.802 15:37:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.802 15:37:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.802 15:37:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.802 15:37:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:05.802 15:37:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:05.802 15:37:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:05.802 15:37:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:05.802 15:37:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.802 15:37:07 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.802 15:37:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.802 15:37:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:05.802 15:37:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:05.802 15:37:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.802 15:37:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.802 15:37:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.802 15:37:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.802 15:37:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.802 15:37:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:14:06.060 00:14:06.060 --- 10.0.0.2 ping statistics --- 00:14:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.060 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:06.060 15:37:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:06.060 00:14:06.060 --- 10.0.0.3 ping statistics --- 00:14:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.060 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:06.060 15:37:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:06.060 00:14:06.060 --- 10.0.0.1 ping statistics --- 00:14:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.060 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:06.060 15:37:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.060 15:37:07 -- nvmf/common.sh@422 -- # return 0 00:14:06.060 15:37:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:06.060 15:37:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.060 15:37:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:06.060 15:37:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:06.060 15:37:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.060 15:37:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:06.060 15:37:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:06.060 15:37:07 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:06.060 15:37:07 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:14:06.060 15:37:07 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:14:06.060 15:37:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:06.060 15:37:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.060 15:37:07 -- common/autotest_common.sh@10 -- # set +x 00:14:06.060 ************************************ 00:14:06.060 START TEST nvmf_digest_clean 00:14:06.060 ************************************ 00:14:06.060 15:37:07 -- common/autotest_common.sh@1111 -- # run_digest 00:14:06.060 15:37:07 -- host/digest.sh@120 -- # local dsa_initiator 00:14:06.060 15:37:07 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:14:06.060 15:37:07 -- host/digest.sh@121 -- # dsa_initiator=false 
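nvmf_veth_init above builds the virtual topology every TCP test in this job runs on: the initiator end stays in the root namespace on 10.0.0.1, the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the peer ends are joined by the nvmf_br bridge, and iptables is opened for NVMe/TCP port 4420. A trimmed-down sketch of that setup, following the same names and addresses as the trace (the real helper also clears any leftover interfaces first):

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per interface; the *_br ends stay in the root namespace for bridging.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the three root-namespace ends together and let NVMe/TCP traffic through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> first target address, as checked in the trace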
00:14:06.060 15:37:07 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:14:06.060 15:37:07 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:14:06.060 15:37:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:06.060 15:37:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:06.060 15:37:07 -- common/autotest_common.sh@10 -- # set +x 00:14:06.060 15:37:07 -- nvmf/common.sh@470 -- # nvmfpid=76250 00:14:06.060 15:37:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:06.060 15:37:07 -- nvmf/common.sh@471 -- # waitforlisten 76250 00:14:06.060 15:37:07 -- common/autotest_common.sh@817 -- # '[' -z 76250 ']' 00:14:06.060 15:37:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.060 15:37:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.060 15:37:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.060 15:37:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.060 15:37:07 -- common/autotest_common.sh@10 -- # set +x 00:14:06.060 [2024-04-17 15:37:07.421009] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:06.060 [2024-04-17 15:37:07.421804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.319 [2024-04-17 15:37:07.564810] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.319 [2024-04-17 15:37:07.723574] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.319 [2024-04-17 15:37:07.723908] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.319 [2024-04-17 15:37:07.724147] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.319 [2024-04-17 15:37:07.724353] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.319 [2024-04-17 15:37:07.724464] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
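nvmfappstart launches the target application inside the namespace with --wait-for-rpc, so the framework stays paused until the test has had a chance to reconfigure the accel layer over RPC. A rough sketch of that launch-and-wait pattern, run from the SPDK repo root; the real waitforlisten helper is more thorough than this polling loop:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the RPC socket until the app answers; rpc_get_methods works even before framework init.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done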
00:14:06.319 [2024-04-17 15:37:07.724598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.265 15:37:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.265 15:37:08 -- common/autotest_common.sh@850 -- # return 0 00:14:07.265 15:37:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:07.265 15:37:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:07.265 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.265 15:37:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.265 15:37:08 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:14:07.265 15:37:08 -- host/digest.sh@126 -- # common_target_config 00:14:07.265 15:37:08 -- host/digest.sh@43 -- # rpc_cmd 00:14:07.265 15:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.265 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.265 null0 00:14:07.265 [2024-04-17 15:37:08.531004] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.265 [2024-04-17 15:37:08.555149] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.265 15:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.265 15:37:08 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:14:07.265 15:37:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:14:07.265 15:37:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:07.265 15:37:08 -- host/digest.sh@80 -- # rw=randread 00:14:07.265 15:37:08 -- host/digest.sh@80 -- # bs=4096 00:14:07.265 15:37:08 -- host/digest.sh@80 -- # qd=128 00:14:07.265 15:37:08 -- host/digest.sh@80 -- # scan_dsa=false 00:14:07.265 15:37:08 -- host/digest.sh@83 -- # bperfpid=76288 00:14:07.265 15:37:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:14:07.265 15:37:08 -- host/digest.sh@84 -- # waitforlisten 76288 /var/tmp/bperf.sock 00:14:07.265 15:37:08 -- common/autotest_common.sh@817 -- # '[' -z 76288 ']' 00:14:07.265 15:37:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:07.265 15:37:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:07.265 15:37:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:07.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:07.265 15:37:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:07.265 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.265 [2024-04-17 15:37:08.607030] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
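common_target_config above feeds one rpc_cmd batch to the paused target; the trace only shows its effects (a null0 bdev, the TCP transport, a listener on 10.0.0.2:4420). A hedged reconstruction of an equivalent configuration as individual rpc.py calls; the exact options digest.sh passes are not visible in this excerpt, so the bdev sizes below are illustrative, while the NQN, serial and listener address come from the trace:

  rpc="./scripts/rpc.py"                     # target RPC socket: /var/tmp/spdk.sock
  $rpc framework_start_init                  # leave --wait-for-rpc mode
  $rpc nvmf_create_transport -t tcp -o       # '-o' mirrors NVMF_TRANSPORT_OPTS from the trace
  $rpc bdev_null_create null0 100 4096       # 100 MiB null bdev, 4 KiB blocks (sizes assumed)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420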
00:14:07.265 [2024-04-17 15:37:08.607590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76288 ] 00:14:07.523 [2024-04-17 15:37:08.745051] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.523 [2024-04-17 15:37:08.888996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.456 15:37:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:08.456 15:37:09 -- common/autotest_common.sh@850 -- # return 0 00:14:08.456 15:37:09 -- host/digest.sh@86 -- # false 00:14:08.456 15:37:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:14:08.456 15:37:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:08.713 15:37:09 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:08.713 15:37:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:08.971 nvme0n1 00:14:08.971 15:37:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:14:08.971 15:37:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:08.971 Running I/O for 2 seconds... 00:14:11.500 00:14:11.500 Latency(us) 00:14:11.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.500 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:14:11.501 nvme0n1 : 2.01 14682.24 57.35 0.00 0.00 8712.09 8043.05 20018.27 00:14:11.501 =================================================================================================================== 00:14:11.501 Total : 14682.24 57.35 0.00 0.00 8712.09 8043.05 20018.27 00:14:11.501 0 00:14:11.501 15:37:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:14:11.501 15:37:12 -- host/digest.sh@93 -- # get_accel_stats 00:14:11.501 15:37:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:11.501 15:37:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:11.501 | select(.opcode=="crc32c") 00:14:11.501 | "\(.module_name) \(.executed)"' 00:14:11.501 15:37:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:11.501 15:37:12 -- host/digest.sh@94 -- # false 00:14:11.501 15:37:12 -- host/digest.sh@94 -- # exp_module=software 00:14:11.501 15:37:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:14:11.501 15:37:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:11.501 15:37:12 -- host/digest.sh@98 -- # killprocess 76288 00:14:11.501 15:37:12 -- common/autotest_common.sh@936 -- # '[' -z 76288 ']' 00:14:11.501 15:37:12 -- common/autotest_common.sh@940 -- # kill -0 76288 00:14:11.501 15:37:12 -- common/autotest_common.sh@941 -- # uname 00:14:11.501 15:37:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.501 15:37:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76288 00:14:11.501 killing process with pid 76288 00:14:11.501 Received shutdown signal, test time was about 2.000000 seconds 00:14:11.501 00:14:11.501 Latency(us) 00:14:11.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:14:11.501 =================================================================================================================== 00:14:11.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.501 15:37:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:11.501 15:37:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:11.501 15:37:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76288' 00:14:11.501 15:37:12 -- common/autotest_common.sh@955 -- # kill 76288 00:14:11.501 15:37:12 -- common/autotest_common.sh@960 -- # wait 76288 00:14:11.759 15:37:13 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:14:11.759 15:37:13 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:14:11.759 15:37:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:11.759 15:37:13 -- host/digest.sh@80 -- # rw=randread 00:14:11.759 15:37:13 -- host/digest.sh@80 -- # bs=131072 00:14:11.759 15:37:13 -- host/digest.sh@80 -- # qd=16 00:14:11.759 15:37:13 -- host/digest.sh@80 -- # scan_dsa=false 00:14:11.759 15:37:13 -- host/digest.sh@83 -- # bperfpid=76348 00:14:11.759 15:37:13 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:14:11.759 15:37:13 -- host/digest.sh@84 -- # waitforlisten 76348 /var/tmp/bperf.sock 00:14:11.759 15:37:13 -- common/autotest_common.sh@817 -- # '[' -z 76348 ']' 00:14:11.759 15:37:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:11.759 15:37:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.759 15:37:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:11.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:11.759 15:37:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.759 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:14:11.759 [2024-04-17 15:37:13.102518] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:11.759 [2024-04-17 15:37:13.103144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76348 ] 00:14:11.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:11.759 Zero copy mechanism will not be used. 
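Each run_bperf invocation repeats the client-side sequence visible above: bdevperf is started with -z and its own RPC socket, the framework is released, a controller is attached with --ddgst so every NVMe/TCP data PDU carries a digest, and perform_tests drives I/O for two seconds before the process is killed. A condensed sketch of one such run, with the socket and the 131072-byte/qd16 workload parameters taken from the trace (paths relative to the SPDK repo root):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc framework_start_init
  # --ddgst enables the NVMe/TCP data digest, which is what exercises the crc32c accel path.
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  kill "$bperfpid"; wait "$bperfpid"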
00:14:12.016 [2024-04-17 15:37:13.248700] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.016 [2024-04-17 15:37:13.396053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.949 15:37:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.949 15:37:14 -- common/autotest_common.sh@850 -- # return 0 00:14:12.949 15:37:14 -- host/digest.sh@86 -- # false 00:14:12.949 15:37:14 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:14:12.949 15:37:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:13.208 15:37:14 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:13.208 15:37:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:13.465 nvme0n1 00:14:13.465 15:37:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:14:13.465 15:37:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:13.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:13.774 Zero copy mechanism will not be used. 00:14:13.774 Running I/O for 2 seconds... 00:14:15.719 00:14:15.719 Latency(us) 00:14:15.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:14:15.719 nvme0n1 : 2.00 7386.14 923.27 0.00 0.00 2163.05 1966.08 7119.59 00:14:15.719 =================================================================================================================== 00:14:15.720 Total : 7386.14 923.27 0.00 0.00 2163.05 1966.08 7119.59 00:14:15.720 0 00:14:15.720 15:37:16 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:14:15.720 15:37:16 -- host/digest.sh@93 -- # get_accel_stats 00:14:15.720 15:37:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:15.720 15:37:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:15.720 | select(.opcode=="crc32c") 00:14:15.720 | "\(.module_name) \(.executed)"' 00:14:15.720 15:37:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:15.978 15:37:17 -- host/digest.sh@94 -- # false 00:14:15.978 15:37:17 -- host/digest.sh@94 -- # exp_module=software 00:14:15.978 15:37:17 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:14:15.978 15:37:17 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:15.978 15:37:17 -- host/digest.sh@98 -- # killprocess 76348 00:14:15.978 15:37:17 -- common/autotest_common.sh@936 -- # '[' -z 76348 ']' 00:14:15.978 15:37:17 -- common/autotest_common.sh@940 -- # kill -0 76348 00:14:15.978 15:37:17 -- common/autotest_common.sh@941 -- # uname 00:14:15.978 15:37:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.978 15:37:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76348 00:14:15.978 killing process with pid 76348 00:14:15.978 Received shutdown signal, test time was about 2.000000 seconds 00:14:15.978 00:14:15.978 Latency(us) 00:14:15.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.978 =================================================================================================================== 00:14:15.978 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.978 15:37:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:15.978 15:37:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:15.978 15:37:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76348' 00:14:15.978 15:37:17 -- common/autotest_common.sh@955 -- # kill 76348 00:14:15.978 15:37:17 -- common/autotest_common.sh@960 -- # wait 76348 00:14:16.237 15:37:17 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:14:16.237 15:37:17 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:14:16.237 15:37:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:16.237 15:37:17 -- host/digest.sh@80 -- # rw=randwrite 00:14:16.237 15:37:17 -- host/digest.sh@80 -- # bs=4096 00:14:16.237 15:37:17 -- host/digest.sh@80 -- # qd=128 00:14:16.237 15:37:17 -- host/digest.sh@80 -- # scan_dsa=false 00:14:16.237 15:37:17 -- host/digest.sh@83 -- # bperfpid=76408 00:14:16.237 15:37:17 -- host/digest.sh@84 -- # waitforlisten 76408 /var/tmp/bperf.sock 00:14:16.237 15:37:17 -- common/autotest_common.sh@817 -- # '[' -z 76408 ']' 00:14:16.237 15:37:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:16.237 15:37:17 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:14:16.237 15:37:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:16.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:16.237 15:37:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:16.237 15:37:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:16.237 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:14:16.496 [2024-04-17 15:37:17.702914] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
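The pass/fail decision for each run is the accel-stats check traced above: once the I/O finishes, the test reads the crc32c counters from the bdevperf instance and verifies both that digests were actually executed and that they ran on the expected module (software here, dsa in the DSA-enabled variants). A sketch of that check, reusing the same jq filter as host/digest.sh:

  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  exp_module=software        # 'dsa' when the dsa_initiator variant is selected
  (( acc_executed > 0 ))            || { echo "no crc32c operations executed" >&2; exit 1; }
  [[ $acc_module == "$exp_module" ]] || { echo "digest ran on $acc_module, expected $exp_module" >&2; exit 1; }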
00:14:16.496 [2024-04-17 15:37:17.703397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76408 ] 00:14:16.496 [2024-04-17 15:37:17.840797] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.754 [2024-04-17 15:37:17.991299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.320 15:37:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:17.320 15:37:18 -- common/autotest_common.sh@850 -- # return 0 00:14:17.320 15:37:18 -- host/digest.sh@86 -- # false 00:14:17.320 15:37:18 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:14:17.320 15:37:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:17.888 15:37:19 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:17.888 15:37:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:18.146 nvme0n1 00:14:18.146 15:37:19 -- host/digest.sh@92 -- # bperf_py perform_tests 00:14:18.146 15:37:19 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:18.146 Running I/O for 2 seconds... 00:14:20.745 00:14:20.745 Latency(us) 00:14:20.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.745 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.745 nvme0n1 : 2.00 15206.50 59.40 0.00 0.00 8410.21 2919.33 19422.49 00:14:20.745 =================================================================================================================== 00:14:20.745 Total : 15206.50 59.40 0.00 0.00 8410.21 2919.33 19422.49 00:14:20.745 0 00:14:20.745 15:37:21 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:14:20.745 15:37:21 -- host/digest.sh@93 -- # get_accel_stats 00:14:20.745 15:37:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:20.745 15:37:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:20.745 | select(.opcode=="crc32c") 00:14:20.745 | "\(.module_name) \(.executed)"' 00:14:20.745 15:37:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:20.745 15:37:21 -- host/digest.sh@94 -- # false 00:14:20.745 15:37:21 -- host/digest.sh@94 -- # exp_module=software 00:14:20.745 15:37:21 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:14:20.745 15:37:21 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:20.745 15:37:21 -- host/digest.sh@98 -- # killprocess 76408 00:14:20.745 15:37:21 -- common/autotest_common.sh@936 -- # '[' -z 76408 ']' 00:14:20.745 15:37:21 -- common/autotest_common.sh@940 -- # kill -0 76408 00:14:20.745 15:37:21 -- common/autotest_common.sh@941 -- # uname 00:14:20.745 15:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.745 15:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76408 00:14:20.745 killing process with pid 76408 00:14:20.745 Received shutdown signal, test time was about 2.000000 seconds 00:14:20.745 00:14:20.745 Latency(us) 00:14:20.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:14:20.745 =================================================================================================================== 00:14:20.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.745 15:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:20.745 15:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:20.745 15:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76408' 00:14:20.745 15:37:21 -- common/autotest_common.sh@955 -- # kill 76408 00:14:20.745 15:37:21 -- common/autotest_common.sh@960 -- # wait 76408 00:14:21.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:21.004 15:37:22 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:14:21.004 15:37:22 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:14:21.004 15:37:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:21.004 15:37:22 -- host/digest.sh@80 -- # rw=randwrite 00:14:21.004 15:37:22 -- host/digest.sh@80 -- # bs=131072 00:14:21.004 15:37:22 -- host/digest.sh@80 -- # qd=16 00:14:21.004 15:37:22 -- host/digest.sh@80 -- # scan_dsa=false 00:14:21.004 15:37:22 -- host/digest.sh@83 -- # bperfpid=76474 00:14:21.004 15:37:22 -- host/digest.sh@84 -- # waitforlisten 76474 /var/tmp/bperf.sock 00:14:21.004 15:37:22 -- common/autotest_common.sh@817 -- # '[' -z 76474 ']' 00:14:21.004 15:37:22 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:14:21.004 15:37:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:21.004 15:37:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:21.004 15:37:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:21.004 15:37:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:21.004 15:37:22 -- common/autotest_common.sh@10 -- # set +x 00:14:21.004 [2024-04-17 15:37:22.281746] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:21.004 [2024-04-17 15:37:22.282330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:14:21.004 Zero copy mechanism will not be used. 
00:14:21.004 =spdk_pid76474 ] 00:14:21.004 [2024-04-17 15:37:22.428747] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.262 [2024-04-17 15:37:22.600458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.195 15:37:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.195 15:37:23 -- common/autotest_common.sh@850 -- # return 0 00:14:22.195 15:37:23 -- host/digest.sh@86 -- # false 00:14:22.195 15:37:23 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:14:22.195 15:37:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:22.453 15:37:23 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:22.453 15:37:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:22.711 nvme0n1 00:14:22.711 15:37:24 -- host/digest.sh@92 -- # bperf_py perform_tests 00:14:22.711 15:37:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:22.968 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:22.968 Zero copy mechanism will not be used. 00:14:22.968 Running I/O for 2 seconds... 00:14:24.910 00:14:24.910 Latency(us) 00:14:24.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:14:24.910 nvme0n1 : 2.00 6238.08 779.76 0.00 0.00 2559.26 1683.08 4468.36 00:14:24.910 =================================================================================================================== 00:14:24.910 Total : 6238.08 779.76 0.00 0.00 2559.26 1683.08 4468.36 00:14:24.910 0 00:14:24.910 15:37:26 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:14:24.910 15:37:26 -- host/digest.sh@93 -- # get_accel_stats 00:14:24.910 15:37:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:24.910 15:37:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:24.910 | select(.opcode=="crc32c") 00:14:24.910 | "\(.module_name) \(.executed)"' 00:14:24.910 15:37:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:25.169 15:37:26 -- host/digest.sh@94 -- # false 00:14:25.169 15:37:26 -- host/digest.sh@94 -- # exp_module=software 00:14:25.169 15:37:26 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:14:25.169 15:37:26 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:25.169 15:37:26 -- host/digest.sh@98 -- # killprocess 76474 00:14:25.169 15:37:26 -- common/autotest_common.sh@936 -- # '[' -z 76474 ']' 00:14:25.169 15:37:26 -- common/autotest_common.sh@940 -- # kill -0 76474 00:14:25.169 15:37:26 -- common/autotest_common.sh@941 -- # uname 00:14:25.169 15:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.169 15:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76474 00:14:25.169 killing process with pid 76474 00:14:25.169 Received shutdown signal, test time was about 2.000000 seconds 00:14:25.169 00:14:25.169 Latency(us) 00:14:25.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.169 
=================================================================================================================== 00:14:25.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.169 15:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:25.169 15:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:25.169 15:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76474' 00:14:25.169 15:37:26 -- common/autotest_common.sh@955 -- # kill 76474 00:14:25.169 15:37:26 -- common/autotest_common.sh@960 -- # wait 76474 00:14:25.735 15:37:26 -- host/digest.sh@132 -- # killprocess 76250 00:14:25.735 15:37:26 -- common/autotest_common.sh@936 -- # '[' -z 76250 ']' 00:14:25.735 15:37:26 -- common/autotest_common.sh@940 -- # kill -0 76250 00:14:25.735 15:37:26 -- common/autotest_common.sh@941 -- # uname 00:14:25.735 15:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.735 15:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76250 00:14:25.735 killing process with pid 76250 00:14:25.735 15:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:25.735 15:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:25.735 15:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76250' 00:14:25.735 15:37:26 -- common/autotest_common.sh@955 -- # kill 76250 00:14:25.735 15:37:26 -- common/autotest_common.sh@960 -- # wait 76250 00:14:25.993 00:14:25.993 real 0m19.931s 00:14:25.993 user 0m38.692s 00:14:25.993 sys 0m4.943s 00:14:25.993 ************************************ 00:14:25.993 END TEST nvmf_digest_clean 00:14:25.993 ************************************ 00:14:25.993 15:37:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.993 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:14:25.993 15:37:27 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:14:25.993 15:37:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:25.993 15:37:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.993 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:14:25.993 ************************************ 00:14:25.993 START TEST nvmf_digest_error 00:14:25.993 ************************************ 00:14:25.993 15:37:27 -- common/autotest_common.sh@1111 -- # run_digest_error 00:14:25.993 15:37:27 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:14:25.993 15:37:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:25.993 15:37:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.993 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:14:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.993 15:37:27 -- nvmf/common.sh@470 -- # nvmfpid=76567 00:14:25.993 15:37:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:25.993 15:37:27 -- nvmf/common.sh@471 -- # waitforlisten 76567 00:14:25.993 15:37:27 -- common/autotest_common.sh@817 -- # '[' -z 76567 ']' 00:14:25.993 15:37:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.993 15:37:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.993 15:37:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
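killprocess, which tears down both the bdevperf instances and the main target above, follows the pattern visible in the trace: confirm the PID is still alive, look up its command name (the real helper treats a sudo wrapper specially), announce the kill, then kill and wait so the exit status is collected before the next test starts. A simplified sketch of that helper, covering only the Linux path seen in this run:

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 0                      # nothing to do if it already exited
      process_name=$(ps --no-headers -o comm= "$pid") # used by the real helper to detect sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                             # reap it so sockets and ports are really free
  }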
00:14:25.993 15:37:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.993 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:14:26.251 [2024-04-17 15:37:27.453299] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:26.251 [2024-04-17 15:37:27.453415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.251 [2024-04-17 15:37:27.588575] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.509 [2024-04-17 15:37:27.735544] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.509 [2024-04-17 15:37:27.735611] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.509 [2024-04-17 15:37:27.735625] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.509 [2024-04-17 15:37:27.735634] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.509 [2024-04-17 15:37:27.735642] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.509 [2024-04-17 15:37:27.735681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.074 15:37:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.074 15:37:28 -- common/autotest_common.sh@850 -- # return 0 00:14:27.074 15:37:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:27.074 15:37:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.074 15:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.074 15:37:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.074 15:37:28 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:14:27.074 15:37:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.074 15:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.074 [2024-04-17 15:37:28.504329] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:14:27.074 15:37:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.074 15:37:28 -- host/digest.sh@105 -- # common_target_config 00:14:27.074 15:37:28 -- host/digest.sh@43 -- # rpc_cmd 00:14:27.074 15:37:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.074 15:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.332 null0 00:14:27.332 [2024-04-17 15:37:28.653231] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.332 [2024-04-17 15:37:28.677415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.332 15:37:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.332 15:37:28 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:14:27.332 15:37:28 -- host/digest.sh@54 -- # local rw bs qd 00:14:27.332 15:37:28 -- host/digest.sh@56 -- # rw=randread 00:14:27.332 15:37:28 -- host/digest.sh@56 -- # bs=4096 00:14:27.332 15:37:28 -- host/digest.sh@56 -- # qd=128 00:14:27.332 15:37:28 -- host/digest.sh@58 -- # bperfpid=76599 00:14:27.332 15:37:28 -- host/digest.sh@60 -- # waitforlisten 76599 /var/tmp/bperf.sock 00:14:27.332 15:37:28 -- common/autotest_common.sh@817 -- # '[' -z 76599 ']' 00:14:27.332 15:37:28 -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:14:27.332 15:37:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:27.332 15:37:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.332 15:37:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:27.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:27.332 15:37:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.332 15:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.332 [2024-04-17 15:37:28.735548] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:27.332 [2024-04-17 15:37:28.736052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76599 ] 00:14:27.591 [2024-04-17 15:37:28.872060] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.591 [2024-04-17 15:37:29.019260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.527 15:37:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.527 15:37:29 -- common/autotest_common.sh@850 -- # return 0 00:14:28.527 15:37:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:28.527 15:37:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:28.527 15:37:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:14:28.527 15:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.527 15:37:29 -- common/autotest_common.sh@10 -- # set +x 00:14:28.527 15:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.527 15:37:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:28.527 15:37:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:28.786 nvme0n1 00:14:28.786 15:37:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:14:28.786 15:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.786 15:37:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.786 15:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.786 15:37:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:14:28.786 15:37:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:29.045 Running I/O for 2 seconds... 
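nvmf_digest_error flips the same machinery into a fault-injection mode: the target's crc32c operations are routed to the accel "error" module while it is still paused, injection stays disabled while the controller is attached, and only then are 256 "corrupt" errors armed, so the digests the target computes for in-flight reads are deliberately wrong. That is what produces the wall of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions that follows. A sketch of the RPC sequence, with the two sockets as in the trace and the target-side bdev/subsystem setup (as in the digest_clean run) omitted:

  tgt_rpc="./scripts/rpc.py"                        # target app, /var/tmp/spdk.sock
  bperf_rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"

  # Target side (still in --wait-for-rpc mode): route crc32c through the error-injection module.
  $tgt_rpc accel_assign_opc -o crc32c -m error

  # Initiator side: count NVMe errors and retry forever so injected digest failures
  # do not abort the bdevperf job.
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep injection off while connecting, attach with data digest, then arm 256 corruptions.
  $tgt_rpc accel_error_inject_error -o crc32c -t disable
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests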
00:14:29.045 [2024-04-17 15:37:30.348167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.348239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.348273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.365367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.365438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.365471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.383078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.383160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.400742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.400830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.400865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.417935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.417975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.418006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.435104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.435160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.435176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.451888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.451947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.451986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.468974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.469034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.469065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.045 [2024-04-17 15:37:30.485810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.045 [2024-04-17 15:37:30.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.045 [2024-04-17 15:37:30.485890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.503021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.503063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.503078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.520074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.520116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.520145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.537218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.537294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.537324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.553636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.553682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.553713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.569624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.569668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.569698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.585872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.585928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.585957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.602192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.602266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.602298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.619571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.619616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.619630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.637048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.637090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.637105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.654633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.654705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.654738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.351 [2024-04-17 15:37:30.671327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.351 [2024-04-17 15:37:30.671364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.351 [2024-04-17 15:37:30.671394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.688570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.688605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.688635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.706384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.706451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.706481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.724191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.724232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.724247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.741544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.741606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.741621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.758893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.758948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.758964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.352 [2024-04-17 15:37:30.776497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.352 [2024-04-17 15:37:30.776561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.352 [2024-04-17 15:37:30.776577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.794050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.794101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 [2024-04-17 15:37:30.794117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.811476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.811520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 
[2024-04-17 15:37:30.811535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.828888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.828933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 [2024-04-17 15:37:30.828948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.846482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.846519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 [2024-04-17 15:37:30.846565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.864506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.864544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 [2024-04-17 15:37:30.864558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.882053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.613 [2024-04-17 15:37:30.882092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.613 [2024-04-17 15:37:30.882106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.613 [2024-04-17 15:37:30.899229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.899273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.899288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:30.916486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.916535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.916549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:30.933743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.933799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14720 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.933815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:30.951055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.951126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.951142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:30.968583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.968657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.968672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:30.985818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:30.985868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:30.985883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:31.003001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:31.003066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:31.003081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:31.020434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:31.020477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:31.020492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:31.037801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:31.037841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:31.037856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.614 [2024-04-17 15:37:31.055215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.614 [2024-04-17 15:37:31.055254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:24194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.614 [2024-04-17 15:37:31.055270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.072905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.072940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.072970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.090020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.090056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.090085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.106350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.106385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.106413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.123620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.123656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.123685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.140607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.140645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.140661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.157943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.157982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.157997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.175188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.175229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.175244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.192875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.192912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.192925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.209234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.209271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.209300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.225790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.225838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.225870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.243152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.243194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.243207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.260799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.260845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.260860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.278353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.278394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.278408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.294810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 
00:14:29.874 [2024-04-17 15:37:31.294873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.294905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:29.874 [2024-04-17 15:37:31.310979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:29.874 [2024-04-17 15:37:31.311043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.874 [2024-04-17 15:37:31.311057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.326603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.326658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.326672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.342811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.342862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.342892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.358709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.358784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.358799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.374322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.374373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.374402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.389673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.389723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.389752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.405280] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.405331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.405360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.428344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.428396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.428425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.444450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.444505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.444534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.461913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.461954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.461967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.478366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.478426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.478454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.493988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.494042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.494054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.509369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.509419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.509447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.524983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.525034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.525062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.540359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.540410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.540438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.555502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.555554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.555582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.134 [2024-04-17 15:37:31.570846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.134 [2024-04-17 15:37:31.570901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.134 [2024-04-17 15:37:31.570915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.585969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.394 [2024-04-17 15:37:31.586019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.394 [2024-04-17 15:37:31.586047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.601171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.394 [2024-04-17 15:37:31.601220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.394 [2024-04-17 15:37:31.601249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.616411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.394 [2024-04-17 15:37:31.616461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.394 [2024-04-17 15:37:31.616489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.631637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.394 [2024-04-17 15:37:31.631679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.394 [2024-04-17 15:37:31.631692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.648308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.394 [2024-04-17 15:37:31.648349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.394 [2024-04-17 15:37:31.648362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.394 [2024-04-17 15:37:31.665533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.665577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.665591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.682629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.682668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.699404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.699456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.699485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.715498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.715551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.715579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.731289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.731355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.731384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.747261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.747343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.747372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.763128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.763181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.763194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.779140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.779209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.779222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.794933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.795011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.795026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.810616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.810668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.810697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.395 [2024-04-17 15:37:31.826311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.395 [2024-04-17 15:37:31.826363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.395 [2024-04-17 15:37:31.826391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.842003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.842055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 
[2024-04-17 15:37:31.842068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.857817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.857868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.857897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.874531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.874601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.874614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.891483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.891536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.891565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.908058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.908109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.908138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.924407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.924463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.924492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.941543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.941642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.959448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.959503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8285 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.959532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.977024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.977064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.977078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:31.994354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:31.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:31.994413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:32.012097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:32.012140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:32.012154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:32.029736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:32.029785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:32.029799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:32.047174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:32.047219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:32.047234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:32.064419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:32.064466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:32.064481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.654 [2024-04-17 15:37:32.081792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.654 [2024-04-17 15:37:32.081855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-04-17 15:37:32.081869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.099368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.099422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.099436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.116647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.116687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.116701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.134440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.134480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.151849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.151905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.151920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.169049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.169107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.169120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.186371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.186415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.186430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.203548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.203588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.203601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.220641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.220683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.220697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.237691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.237730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.237743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.254758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.254806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.254820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.271992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.272033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.272047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.289530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.289567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.289596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.912 [2024-04-17 15:37:32.306732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.912 [2024-04-17 15:37:32.306786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.912 [2024-04-17 15:37:32.306802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.913 [2024-04-17 15:37:32.323775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e460) 00:14:30.913 
[2024-04-17 15:37:32.323841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.913 [2024-04-17 15:37:32.323857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:30.913 00:14:30.913 Latency(us) 00:14:30.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:14:30.913 nvme0n1 : 2.01 15006.03 58.62 0.00 0.00 8523.01 7328.12 30980.65 00:14:30.913 =================================================================================================================== 00:14:30.913 Total : 15006.03 58.62 0.00 0.00 8523.01 7328.12 30980.65 00:14:30.913 0 00:14:30.913 15:37:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:14:30.913 15:37:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:14:30.913 15:37:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:14:30.913 15:37:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:14:30.913 | .driver_specific 00:14:30.913 | .nvme_error 00:14:30.913 | .status_code 00:14:30.913 | .command_transient_transport_error' 00:14:31.480 15:37:32 -- host/digest.sh@71 -- # (( 118 > 0 )) 00:14:31.480 15:37:32 -- host/digest.sh@73 -- # killprocess 76599 00:14:31.480 15:37:32 -- common/autotest_common.sh@936 -- # '[' -z 76599 ']' 00:14:31.480 15:37:32 -- common/autotest_common.sh@940 -- # kill -0 76599 00:14:31.480 15:37:32 -- common/autotest_common.sh@941 -- # uname 00:14:31.480 15:37:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.480 15:37:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76599 00:14:31.480 15:37:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:31.480 15:37:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:31.480 killing process with pid 76599 00:14:31.480 15:37:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76599' 00:14:31.480 Received shutdown signal, test time was about 2.000000 seconds 00:14:31.480 00:14:31.480 Latency(us) 00:14:31.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.480 =================================================================================================================== 00:14:31.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.480 15:37:32 -- common/autotest_common.sh@955 -- # kill 76599 00:14:31.480 15:37:32 -- common/autotest_common.sh@960 -- # wait 76599 00:14:31.738 15:37:33 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:14:31.738 15:37:33 -- host/digest.sh@54 -- # local rw bs qd 00:14:31.738 15:37:33 -- host/digest.sh@56 -- # rw=randread 00:14:31.738 15:37:33 -- host/digest.sh@56 -- # bs=131072 00:14:31.738 15:37:33 -- host/digest.sh@56 -- # qd=16 00:14:31.738 15:37:33 -- host/digest.sh@58 -- # bperfpid=76665 00:14:31.738 15:37:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:14:31.738 15:37:33 -- host/digest.sh@60 -- # waitforlisten 76665 /var/tmp/bperf.sock 00:14:31.738 15:37:33 -- common/autotest_common.sh@817 -- # '[' -z 76665 ']' 00:14:31.738 15:37:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:31.738 15:37:33 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:31.738 15:37:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:31.738 15:37:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.738 15:37:33 -- common/autotest_common.sh@10 -- # set +x 00:14:31.738 [2024-04-17 15:37:33.071438] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:31.738 [2024-04-17 15:37:33.071528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76665 ] 00:14:31.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:31.738 Zero copy mechanism will not be used. 00:14:31.996 [2024-04-17 15:37:33.206732] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.996 [2024-04-17 15:37:33.346229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.929 15:37:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.929 15:37:34 -- common/autotest_common.sh@850 -- # return 0 00:14:32.929 15:37:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:32.929 15:37:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:32.929 15:37:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:14:32.929 15:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.929 15:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:32.929 15:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.929 15:37:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:32.929 15:37:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:33.186 nvme0n1 00:14:33.186 15:37:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:14:33.186 15:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.186 15:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:33.186 15:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.186 15:37:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:14:33.186 15:37:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:33.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:33.443 Zero copy mechanism will not be used. 00:14:33.443 Running I/O for 2 seconds... 
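The xtrace lines above set up the second digest-error pass (run_bperf_err randread 131072 16): bdevperf is started paused against /var/tmp/bperf.sock, crc32c corruption is injected on the target side so the data digest sent back to the host is wrong, the TCP controller is attached with --ddgst, and each mismatch then completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) in the records that follow. A rough sketch of that sequence, pieced together only from the commands visible in the trace (not the script itself) and assuming the target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420:

  # Sketch reconstructed from the xtrace output; paths are as shown in the trace.
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf paused (-z): 131072-byte random reads, queue depth 16, 2 seconds, core mask 0x2.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-controller NVMe error counters and retry failed I/O indefinitely.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target-side RPC (default socket in the trace): corrupt every 32nd crc32c computation
  # so the data digest the host receives is bad.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Attach the controller with data digest enabled, then run the workload.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

  # Read back the transient transport error count, as get_transient_errcount does above.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'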
00:14:33.443 [2024-04-17 15:37:34.768644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.768725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.768744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.772975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.773016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.773031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.777264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.777305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.777320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.781605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.781664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.785967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.786024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.786038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.790313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.790355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.790370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.794673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.794730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.799046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.799088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.799102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.803372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.803415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.803430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.807657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.807699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.807714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.443 [2024-04-17 15:37:34.812023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.443 [2024-04-17 15:37:34.812065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.443 [2024-04-17 15:37:34.812079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.816315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.816358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.816372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.820700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.820782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.820797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.825147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.825201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.825216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.829640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.829694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.829724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.834120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.834176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.834190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.838480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.838536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.838551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.842947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.843009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.843025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.847206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.847247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.847262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.851485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.851538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.851552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.855932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.855989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:33.444 [2024-04-17 15:37:34.856004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.860218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.860274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.860288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.864551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.864608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.864622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.868941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.868994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.869024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.873223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.873278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.873308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.877530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.877588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.877602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.444 [2024-04-17 15:37:34.881887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.444 [2024-04-17 15:37:34.881943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.444 [2024-04-17 15:37:34.881972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.886088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.886143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.890615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.890655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.890669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.894988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.895046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.899201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.899238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.899253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.903511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.903567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.903598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.907832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.907887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.907901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.912238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.912280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.912295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.916515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.916554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.702 [2024-04-17 15:37:34.916568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.702 [2024-04-17 15:37:34.920850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.702 [2024-04-17 15:37:34.920905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.920919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.925225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.925283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.925298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.929531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.929588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.929617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.933854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.933908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.933922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.938151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.938205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.938220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.942366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.942419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.942448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.946894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:33.703 [2024-04-17 15:37:34.946945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.946973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.951270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.951309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.951324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.955638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.955694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.955708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.959942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.959994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.960023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.964539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.964604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.964618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.968926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.968997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.969011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.973281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.973379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.977769] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.977819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.977834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.982153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.982205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.982235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.986359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.986399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.986413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.990631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.990687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.990701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.995047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.995087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.995101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:34.999216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:34.999256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:34.999271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.003372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.003411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.003424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.007576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.007614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.007628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.011785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.011823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.011836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.015952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.015991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.016005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.020353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.020408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.020422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.024803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.024847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.703 [2024-04-17 15:37:35.024861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.703 [2024-04-17 15:37:35.029168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.703 [2024-04-17 15:37:35.029221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.029235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.033476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.033516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.033530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.037784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.037848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.037862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.042141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.042196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.042210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.046331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.046385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.046399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.050852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.050890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.050905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.055364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.055410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.055423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.059769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.059819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.059834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.064096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.064150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.064164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.068592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.068635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.068649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.072990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.073057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.073087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.077304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.077344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.077358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.081519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.081574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.081603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.085893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.085958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.085987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.090252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.090308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.090337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.094633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.094675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:33.704 [2024-04-17 15:37:35.094689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.099143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.099182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.103473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.103543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.103573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.107864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.107917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.107931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.112205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.112244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.112259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.116576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.116630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.116644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.120985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.121056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.121070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.125522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.125576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.125605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.129857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.129909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.129938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.134284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.134322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.134337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.138671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.138711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.138725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.704 [2024-04-17 15:37:35.142857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.704 [2024-04-17 15:37:35.142894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.704 [2024-04-17 15:37:35.142908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.147092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.147131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.147145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.151362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.151403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.151417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.155782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.155818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.155832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.160040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.160080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.160093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.164490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.164544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.164574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.169103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.169142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.169157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.173547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.173604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.173634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.177969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.178040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.178054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.182348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.182389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.182403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.186662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:33.963 [2024-04-17 15:37:35.186701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.186715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.191106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.191145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.191158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.195377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.195433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.195463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.199855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.199896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.199910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.204148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.204203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.204233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.208563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.208636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.963 [2024-04-17 15:37:35.208650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.963 [2024-04-17 15:37:35.213003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.963 [2024-04-17 15:37:35.213089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.213104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.217494] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.217551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.217580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.221844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.221899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.221928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.226281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.226334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.226363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.230749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.230800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.230815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.235067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.235110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.235124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.239327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.239381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.239411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.243928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.243969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.243982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.248285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.248340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.248356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.252639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.252695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.252709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.257093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.257134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.257148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.261320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.261376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.261390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.265846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.265899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.270171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.270225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.270255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.274520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.274578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.274608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.278864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.278903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.278917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.283158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.283196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.283210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.287509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.287563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.291906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.291944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.291958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.296289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.296345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.296360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.300750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.300800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.300815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.305032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.305086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.305115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.309499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.309570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.309584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.313998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.314038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.314052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.318267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.318307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.318322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.322440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.322489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.322503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.326703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.326741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.964 [2024-04-17 15:37:35.326767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.964 [2024-04-17 15:37:35.331100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.964 [2024-04-17 15:37:35.331141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.331155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.335494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.335550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:33.965 [2024-04-17 15:37:35.335564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.339783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.339822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.339836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.344145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.344185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.344199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.348538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.348595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.348610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.353173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.353230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.353243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.357711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.357776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.357792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.362023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.362092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.362106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.366410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.366466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.366479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.370851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.370892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.370906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.374926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.375014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.375029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.379104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.379142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.379157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.383170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.383209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.383223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.387224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.387264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.387277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.391689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.391730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.391745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.395888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.395941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.395972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:33.965 [2024-04-17 15:37:35.400428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:33.965 [2024-04-17 15:37:35.400483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.965 [2024-04-17 15:37:35.400512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.404956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.405026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.405057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.409379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.409419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.409434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.413782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.413833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.413848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.418310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.418349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.418366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.422916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.422957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.422972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.427233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:34.225 [2024-04-17 15:37:35.427281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.427295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.431699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.431742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.431769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.436021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.436062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.436076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.440225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.440266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.440281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.444353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.444393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.444407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.448610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.448650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.448664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.452822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.452860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.452874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.457026] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.457068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.457082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.461289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.461330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.461344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.465496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.465537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.465551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.469775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.469815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.469829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.474071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.474110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.474124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.478443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.478492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.478506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.482724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.482774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.482788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.486924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.486961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.486975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.491238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.491279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.491293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.495604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.495661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.495675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.500011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.500067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.500081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.504163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.504218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.504232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.508402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.508443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.508457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.512784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.512823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.512837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.517096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.517135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.517152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.521304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.521359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.521373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.225 [2024-04-17 15:37:35.525460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.225 [2024-04-17 15:37:35.525516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.225 [2024-04-17 15:37:35.525530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.529862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.529900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.529915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.534037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.534093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.534107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.538361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.538402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.538416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.542589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.542629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.542643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.546992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.547030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.547044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.551299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.551353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.551367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.555549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.555603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.555617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.559951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.560005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.560033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.564362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.564418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.564432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.568726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.568781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.568796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.573071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.573127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:34.226 [2024-04-17 15:37:35.573141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.577412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.577452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.577466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.582027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.582082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.582097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.586444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.586487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.586501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.590689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.590730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.590745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.594943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.594988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.595003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.599274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.599315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.599328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.603548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.603602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.603616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.607984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.608038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.608051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.612363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.612420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.612449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.616593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.616662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.620852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.620906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.620920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.625039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.625093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.625125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.629418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.629458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.629472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.633663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.633702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.637992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.638077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.638091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.642319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.642358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.642373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.646629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.646686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.646699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.650979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.651045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.651059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.655522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.655577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.655591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.659853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.226 [2024-04-17 15:37:35.659906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.659921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.226 [2024-04-17 15:37:35.664147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:34.226 [2024-04-17 15:37:35.664201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.226 [2024-04-17 15:37:35.664214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.668481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.668537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.668550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.672814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.672869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.672884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.677002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.677057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.677070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.681328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.681387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.681400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.685546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.685585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.685599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.689733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.689795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.689824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.693841] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.693896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.693910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.698022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.698078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.698092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.702385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.702439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.702454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.706755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.706804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.706819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.710854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.710893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.710907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.715069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.715105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.715119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.719308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.719379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.719408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.723640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.723694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.723708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.728119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.728159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.728173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.732511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.732566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.732597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.736956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.737009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.737039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.741365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.741420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.741466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.745630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.745686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.745701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.750036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.750091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.750105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.754260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.754300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.754315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.758593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.758631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.758645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.762819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.762858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.762872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.767105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.767145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.767158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.771269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.771325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.771339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.775666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.775705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.775733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.780163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.780203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.780216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.784589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.784629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.784643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.789128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.789184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.789197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.793655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.793695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.793709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.798144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.798200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.802389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.802452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.802467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.806816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.806855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.806869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.811187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.811229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.811243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.815594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.815635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.815650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.819942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.819990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.820005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.824352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.824392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.824406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.486 [2024-04-17 15:37:35.828692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.486 [2024-04-17 15:37:35.828732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.486 [2024-04-17 15:37:35.828746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.832879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.832918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.832932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.837142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.837180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.837194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.841516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.841558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.841572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.845806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.845844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.845858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.850023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.850063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.850077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.854280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.854334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.858559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.858614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.862799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.862837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.862851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.866941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.866977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.867000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.871171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.871211] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.871224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.875471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.875510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.875525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.879902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.879941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.879956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.884114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.884153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.884166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.888510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.888560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.888574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.892943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.892996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.897344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.897384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.897398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.901838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.901892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.901906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.906210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.906269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.906284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.910676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.910717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.910731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.915144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.915184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.915198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.919486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.919529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.919543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.487 [2024-04-17 15:37:35.923894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.487 [2024-04-17 15:37:35.923935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.487 [2024-04-17 15:37:35.923948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.928091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.928131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.928146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.932276] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.932317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.932330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.936480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.936521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.936535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.940780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.940821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.940835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.944974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.945013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.945027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.949252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.949305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.953386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.953425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.953439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.957645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.957686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.957700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.961967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.962006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.962020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.966226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.966266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.966280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.970460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.970501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.970514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.974775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.974814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.974829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.979087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.979128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.979141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.983464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.983504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.746 [2024-04-17 15:37:35.983518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.746 [2024-04-17 15:37:35.987795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.746 [2024-04-17 15:37:35.987837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:35.987851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:35.992061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:35.992119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:35.992133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:35.996430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:35.996472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:35.996486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.000751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.000802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.000817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.005194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.005234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.005248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.009467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.009523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.009538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.013749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.013801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.013815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.017944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.017983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.017997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.022193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.022232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.022247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.026404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.026446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.026460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.030645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.030696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.030711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.035122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.035162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.035176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.039501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.039540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.039554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.043896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.043936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.043950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.048154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.048209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:34.747 [2024-04-17 15:37:36.048222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.052418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.052473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.056492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.056546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.056560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.060720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.060792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.060807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.064882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.064937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.064951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.069236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.069290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.069303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.073551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.073604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.073617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.077910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.077949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.082287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.082342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.082356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.086709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.086808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.091089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.091130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.091144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.095424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.095465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.095480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.099754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.099805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.099820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.104043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.104098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.104112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.108271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.108329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.108343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.112431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.112471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.112485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.116803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.116841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.116855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.120942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.120980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.120994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.125187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.125243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.125257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.129500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.129540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.129553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.133925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.133978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.133992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.138245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:34.747 [2024-04-17 15:37:36.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.138298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.142552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.142592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.142606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.146970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.147016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.151370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.151441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.151470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.155861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.155914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.155928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.160242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.160282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.160297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.164379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.164420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.164434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.168654] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.168695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.168709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.172955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.173025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.173039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.177338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.177379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.177393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.181809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.181861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.181876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:34.747 [2024-04-17 15:37:36.186192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:34.747 [2024-04-17 15:37:36.186232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.747 [2024-04-17 15:37:36.186247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.190586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.008 [2024-04-17 15:37:36.190629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.008 [2024-04-17 15:37:36.190644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.194954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.008 [2024-04-17 15:37:36.195012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.008 [2024-04-17 15:37:36.195026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.199340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.008 [2024-04-17 15:37:36.199380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.008 [2024-04-17 15:37:36.199395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.203664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.008 [2024-04-17 15:37:36.203704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.008 [2024-04-17 15:37:36.203718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.208104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.008 [2024-04-17 15:37:36.208142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.008 [2024-04-17 15:37:36.208156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.008 [2024-04-17 15:37:36.212558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.212600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.212615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.216934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.216989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.217003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.221746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.221798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.225938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.225976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.225990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.230248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.230287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.230301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.234547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.234586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.234600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.238906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.238943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.238957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.243154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.243190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.243205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.247361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.247430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.247444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.251712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.251776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.251791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.256020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.256074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.256088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.260256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.260295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.260308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.264499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.264554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.264568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.268739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.268819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.268833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.273040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.273093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.273107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.277237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.277291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.277305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.281626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.281666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.281680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.286123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.286161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:35.009 [2024-04-17 15:37:36.286175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.290418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.290457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.290472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.294725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.294775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.294790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.298990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.299044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.299058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.303391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.303446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.303460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.307951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.308019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.308048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.312411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.312466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.312480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.316801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.316853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.316867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.321137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.321199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.321213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.325442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.325497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.009 [2024-04-17 15:37:36.325511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.009 [2024-04-17 15:37:36.329851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.009 [2024-04-17 15:37:36.329889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.329902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.334162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.334201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.334215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.338371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.338410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.338424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.342887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.342925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.342939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.347228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.347263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.347277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.351536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.351573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.351587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.355795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.355829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.355844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.359996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.360031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.360045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.364262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.364300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.364314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.368450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.368488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.368502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.372775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.372826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.372849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.377222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:35.010 [2024-04-17 15:37:36.377276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.377289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.381703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.381743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.381775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.386131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.386183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.386213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.390629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.390668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.390683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.394936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.394974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.394999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.399286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.399325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.403656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.403711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.403725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.408087] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.408141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.408155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.412463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.412516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.412546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.416828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.416885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.421181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.421241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.421255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.425517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.425572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.425586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.430032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.430087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.430101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.434455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.434525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.434539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.439013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.439052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.439065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.443292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.443335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.010 [2024-04-17 15:37:36.443349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.010 [2024-04-17 15:37:36.447583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.010 [2024-04-17 15:37:36.447639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.011 [2024-04-17 15:37:36.447653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.451883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.451936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.451950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.456107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.456163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.456177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.460477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.460517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.460531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.464832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.464901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.464914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.469476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.469516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.469530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.473719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.473769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.473784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.477974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.478028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.478041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.482234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.482273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.482286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.486651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.486692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.486706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.490971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.491019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.491033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.495275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.495316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.495330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.499607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.270 [2024-04-17 15:37:36.499647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-04-17 15:37:36.499661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.270 [2024-04-17 15:37:36.503964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.504018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.504031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.508213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.508268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.508281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.512466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.512522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.512536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.516853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.516892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.516914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.521161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.521215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.521229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.525676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.525718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:35.271 [2024-04-17 15:37:36.525732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.530205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.530259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.530273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.534603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.534658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.534672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.538951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.539000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.539015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.543212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.543251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.543265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.547533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.547588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.547619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.552092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.552130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.552144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.556346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.556387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.556401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.560738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.560785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.560798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.565124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.565164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.565178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.569500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.569539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.569553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.574012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.574065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.574095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.578532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.578587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.578601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.582967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.583013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.583028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.587359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.587428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.587441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.591692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.591732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.591746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.595912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.595963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.600258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.600312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.600343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.604579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.604619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.604633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.608848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.608901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.608932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.613123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.613176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.613206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.617402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 
00:14:35.271 [2024-04-17 15:37:36.617457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-04-17 15:37:36.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.271 [2024-04-17 15:37:36.621890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.271 [2024-04-17 15:37:36.621929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.621944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.626244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.626298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.626311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.630655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.630697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.630712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.634974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.635024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.635038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.639212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.639252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.639266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.643476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.643531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.643545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.647927] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.647966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.647980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.652197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.652252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.652266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.656475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.656529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.656543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.660690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.660745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.660771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.665072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.665127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.665140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.669341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.669405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.669434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.673773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.673822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.673836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.678363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.678432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.682711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.682762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.682786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.687005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.687043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.687056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.691387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.691440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.691453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.695602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.695672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.695700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.699913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.699967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.699981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.704177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.704231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.704244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.272 [2024-04-17 15:37:36.708699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.272 [2024-04-17 15:37:36.708764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-04-17 15:37:36.708779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.712919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.712972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.712985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.717173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.717227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.717241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.721673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.721712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.721726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.726130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.726182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.726196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.730675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.730716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.730731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.735235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.735276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.735291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.739780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.739829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.739844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.744251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.744306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.744321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.748513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.748567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.748581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.752832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.752871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.752885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.757169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.757224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.757238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:35.531 [2024-04-17 15:37:36.761376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e8530) 00:14:35.531 [2024-04-17 15:37:36.761430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.531 [2024-04-17 15:37:36.761443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:35.531 00:14:35.531 Latency(us) 00:14:35.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.531 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:14:35.531 nvme0n1 : 2.00 7134.12 891.76 0.00 0.00 2239.72 1869.27 5004.57 00:14:35.531 
=================================================================================================================== 00:14:35.531 Total : 7134.12 891.76 0.00 0.00 2239.72 1869.27 5004.57 00:14:35.531 0 00:14:35.531 15:37:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:14:35.531 15:37:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:14:35.531 15:37:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:14:35.531 15:37:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:14:35.531 | .driver_specific 00:14:35.531 | .nvme_error 00:14:35.531 | .status_code 00:14:35.531 | .command_transient_transport_error' 00:14:35.789 15:37:37 -- host/digest.sh@71 -- # (( 460 > 0 )) 00:14:35.790 15:37:37 -- host/digest.sh@73 -- # killprocess 76665 00:14:35.790 15:37:37 -- common/autotest_common.sh@936 -- # '[' -z 76665 ']' 00:14:35.790 15:37:37 -- common/autotest_common.sh@940 -- # kill -0 76665 00:14:35.790 15:37:37 -- common/autotest_common.sh@941 -- # uname 00:14:35.790 15:37:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.790 15:37:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76665 00:14:35.790 15:37:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:35.790 15:37:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:35.790 killing process with pid 76665 00:14:35.790 15:37:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76665' 00:14:35.790 15:37:37 -- common/autotest_common.sh@955 -- # kill 76665 00:14:35.790 Received shutdown signal, test time was about 2.000000 seconds 00:14:35.790 00:14:35.790 Latency(us) 00:14:35.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.790 =================================================================================================================== 00:14:35.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.790 15:37:37 -- common/autotest_common.sh@960 -- # wait 76665 00:14:36.048 15:37:37 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:14:36.048 15:37:37 -- host/digest.sh@54 -- # local rw bs qd 00:14:36.048 15:37:37 -- host/digest.sh@56 -- # rw=randwrite 00:14:36.049 15:37:37 -- host/digest.sh@56 -- # bs=4096 00:14:36.049 15:37:37 -- host/digest.sh@56 -- # qd=128 00:14:36.049 15:37:37 -- host/digest.sh@58 -- # bperfpid=76725 00:14:36.049 15:37:37 -- host/digest.sh@60 -- # waitforlisten 76725 /var/tmp/bperf.sock 00:14:36.049 15:37:37 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:14:36.049 15:37:37 -- common/autotest_common.sh@817 -- # '[' -z 76725 ']' 00:14:36.049 15:37:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:36.049 15:37:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:36.049 15:37:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:36.049 15:37:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.049 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:14:36.307 [2024-04-17 15:37:37.496902] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
00:14:36.307 [2024-04-17 15:37:37.496996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ] 00:14:36.307 [2024-04-17 15:37:37.629409] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.565 [2024-04-17 15:37:37.758614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.132 15:37:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.132 15:37:38 -- common/autotest_common.sh@850 -- # return 0 00:14:37.132 15:37:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:37.132 15:37:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:37.391 15:37:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:14:37.391 15:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.391 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:14:37.391 15:37:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.391 15:37:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:37.391 15:37:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:37.649 nvme0n1 00:14:37.649 15:37:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:14:37.649 15:37:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.649 15:37:39 -- common/autotest_common.sh@10 -- # set +x 00:14:37.649 15:37:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.649 15:37:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:14:37.649 15:37:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:37.909 Running I/O for 2 seconds... 
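The trace above is the setup for the second digest pass: host/digest.sh points the bdevperf RPC socket at a fresh controller with data digest enabled, re-arms the crc32c corruption, runs the workload, and finally checks that the per-bdev transient-transport-error counter is non-zero. A minimal sketch of that sequence, reconstructed from the logged commands, follows; it assumes an NVMe-oF TCP target is already listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, that the bdevperf instance owns /var/tmp/bperf.sock, and that the default RPC socket reaches the application handling the accel_error_inject_error calls. The rpc/bperf_sock/errcount names and the comments are illustrative, not part of the original script.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Keep per-command NVMe error statistics and retry indefinitely, so injected
# digest failures surface as transient transport errors instead of aborting I/O.
$rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled (--ddgst) while crc32c corruption is disabled,
# then corrupt every 256th crc32c operation for the duration of the run.
$rpc accel_error_inject_error -o crc32c -t disable
$rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload, then read the counter the test asserts on;
# the pass criterion is a count greater than zero.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests
errcount=$($rpc -s $bperf_sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))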
00:14:37.909 [2024-04-17 15:37:39.141521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fef90 00:14:37.909 [2024-04-17 15:37:39.144163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.144209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.157846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190feb58 00:14:37.909 [2024-04-17 15:37:39.160418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.160456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.173985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fe2e8 00:14:37.909 [2024-04-17 15:37:39.176595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.176647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.190316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fda78 00:14:37.909 [2024-04-17 15:37:39.192859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.192896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.206568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fd208 00:14:37.909 [2024-04-17 15:37:39.209070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.209106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.223218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fc998 00:14:37.909 [2024-04-17 15:37:39.225684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.225737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.239656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fc128 00:14:37.909 [2024-04-17 15:37:39.242109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.242145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.255879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fb8b8 00:14:37.909 [2024-04-17 15:37:39.258307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.258343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.271933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fb048 00:14:37.909 [2024-04-17 15:37:39.274343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.274380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.288050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fa7d8 00:14:37.909 [2024-04-17 15:37:39.290427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.290462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.304165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f9f68 00:14:37.909 [2024-04-17 15:37:39.306590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.306624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.320425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f96f8 00:14:37.909 [2024-04-17 15:37:39.322859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.322893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:14:37.909 [2024-04-17 15:37:39.336814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f8e88 00:14:37.909 [2024-04-17 15:37:39.339158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.909 [2024-04-17 15:37:39.339195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.352882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f8618 00:14:38.168 [2024-04-17 15:37:39.355183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.355220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.369020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f7da8 00:14:38.168 [2024-04-17 15:37:39.371301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.371368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.385102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f7538 00:14:38.168 [2024-04-17 15:37:39.387373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.387410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.401216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f6cc8 00:14:38.168 [2024-04-17 15:37:39.403481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.403520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.417348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f6458 00:14:38.168 [2024-04-17 15:37:39.419618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.419658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.433572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f5be8 00:14:38.168 [2024-04-17 15:37:39.435821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.435860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.449800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f5378 00:14:38.168 [2024-04-17 15:37:39.452003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.452041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.465905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f4b08 00:14:38.168 [2024-04-17 15:37:39.468082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.468119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.482056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f4298 00:14:38.168 [2024-04-17 15:37:39.484242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.498370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f3a28 00:14:38.168 [2024-04-17 15:37:39.500550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.500603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.514492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f31b8 00:14:38.168 [2024-04-17 15:37:39.516627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.516679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.530553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f2948 00:14:38.168 [2024-04-17 15:37:39.532680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.532718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.546660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f20d8 00:14:38.168 [2024-04-17 15:37:39.548790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.548825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.562775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f1868 00:14:38.168 [2024-04-17 15:37:39.564854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.564888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.578468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f0ff8 00:14:38.168 [2024-04-17 15:37:39.580519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.168 [2024-04-17 15:37:39.580571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:14:38.168 [2024-04-17 15:37:39.594279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f0788 00:14:38.169 [2024-04-17 15:37:39.596406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.169 [2024-04-17 15:37:39.596458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.610494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eff18 00:14:38.427 [2024-04-17 15:37:39.612513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.612549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.626563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ef6a8 00:14:38.427 [2024-04-17 15:37:39.628575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.628625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.642905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eee38 00:14:38.427 [2024-04-17 15:37:39.644902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.644937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.659236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ee5c8 00:14:38.427 [2024-04-17 15:37:39.661169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.661203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.675425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190edd58 00:14:38.427 [2024-04-17 15:37:39.677370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.677420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.691688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ed4e8 00:14:38.427 [2024-04-17 15:37:39.693579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.693613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.707568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ecc78 00:14:38.427 [2024-04-17 15:37:39.709435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.709482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.723815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ec408 00:14:38.427 [2024-04-17 15:37:39.725654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.725689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.740006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ebb98 00:14:38.427 [2024-04-17 15:37:39.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.755636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eb328 00:14:38.427 [2024-04-17 15:37:39.757472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.757518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.771515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eaab8 00:14:38.427 [2024-04-17 15:37:39.773383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.773429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.787995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ea248 00:14:38.427 [2024-04-17 15:37:39.789761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.789794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.804328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e99d8 00:14:38.427 [2024-04-17 15:37:39.806069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 
15:37:39.806133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.820659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e9168 00:14:38.427 [2024-04-17 15:37:39.822445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.822492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.837032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e88f8 00:14:38.427 [2024-04-17 15:37:39.838784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.838824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:14:38.427 [2024-04-17 15:37:39.853293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e8088 00:14:38.427 [2024-04-17 15:37:39.854998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.427 [2024-04-17 15:37:39.855050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.869125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e7818 00:14:38.685 [2024-04-17 15:37:39.870780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.870813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.885291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e6fa8 00:14:38.685 [2024-04-17 15:37:39.886976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.887016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.901654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e6738 00:14:38.685 [2024-04-17 15:37:39.903374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.903407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.918059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e5ec8 00:14:38.685 [2024-04-17 15:37:39.919779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:38.685 [2024-04-17 15:37:39.919837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.934807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e5658 00:14:38.685 [2024-04-17 15:37:39.936418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.936466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.951213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e4de8 00:14:38.685 [2024-04-17 15:37:39.952776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.952808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.967601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e4578 00:14:38.685 [2024-04-17 15:37:39.969191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.969225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:39.984425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e3d08 00:14:38.685 [2024-04-17 15:37:39.985999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:39.986033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.000444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e3498 00:14:38.685 [2024-04-17 15:37:40.001996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.002047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.016440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e2c28 00:14:38.685 [2024-04-17 15:37:40.018004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.018072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.033067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e23b8 00:14:38.685 [2024-04-17 15:37:40.034532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10710 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.034581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.049127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e1b48 00:14:38.685 [2024-04-17 15:37:40.050572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.050635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.065302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e12d8 00:14:38.685 [2024-04-17 15:37:40.066744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.066785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.082040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e0a68 00:14:38.685 [2024-04-17 15:37:40.083528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.083578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.098394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e01f8 00:14:38.685 [2024-04-17 15:37:40.099869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.099905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:14:38.685 [2024-04-17 15:37:40.114972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190df988 00:14:38.685 [2024-04-17 15:37:40.116422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.685 [2024-04-17 15:37:40.116470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:14:38.943 [2024-04-17 15:37:40.131270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190df118 00:14:38.943 [2024-04-17 15:37:40.132624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.943 [2024-04-17 15:37:40.132671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.147464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190de8a8 00:14:38.944 [2024-04-17 15:37:40.148938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:13393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.148996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.163673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190de038 00:14:38.944 [2024-04-17 15:37:40.165073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.165126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.185629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190de038 00:14:38.944 [2024-04-17 15:37:40.188313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.188362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.201710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190de8a8 00:14:38.944 [2024-04-17 15:37:40.204400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.204451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.218234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190df118 00:14:38.944 [2024-04-17 15:37:40.220886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.220937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.234742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190df988 00:14:38.944 [2024-04-17 15:37:40.237288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.237322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.251192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e01f8 00:14:38.944 [2024-04-17 15:37:40.253766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.253819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.267855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e0a68 00:14:38.944 [2024-04-17 15:37:40.270449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.270499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.284250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e12d8 00:14:38.944 [2024-04-17 15:37:40.286718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.286760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.300523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e1b48 00:14:38.944 [2024-04-17 15:37:40.302988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.303049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.317324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e23b8 00:14:38.944 [2024-04-17 15:37:40.319974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.320011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.333973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e2c28 00:14:38.944 [2024-04-17 15:37:40.336529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.336579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.350177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e3498 00:14:38.944 [2024-04-17 15:37:40.352607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.352657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.365900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e3d08 00:14:38.944 [2024-04-17 15:37:40.368444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:14:38.944 [2024-04-17 15:37:40.382546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e4578 00:14:38.944 [2024-04-17 
15:37:40.384948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.944 [2024-04-17 15:37:40.385000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.398468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e4de8 00:14:39.203 [2024-04-17 15:37:40.400860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.400896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.415091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e5658 00:14:39.203 [2024-04-17 15:37:40.417408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.417442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.431450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e5ec8 00:14:39.203 [2024-04-17 15:37:40.433819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.433860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.447856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e6738 00:14:39.203 [2024-04-17 15:37:40.450218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.450266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.464331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e6fa8 00:14:39.203 [2024-04-17 15:37:40.466610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.466643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.480813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e7818 00:14:39.203 [2024-04-17 15:37:40.483102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.483138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.497272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e8088 
00:14:39.203 [2024-04-17 15:37:40.499660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.499695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.513980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e88f8 00:14:39.203 [2024-04-17 15:37:40.516270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.530934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e9168 00:14:39.203 [2024-04-17 15:37:40.533159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.533208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.547561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190e99d8 00:14:39.203 [2024-04-17 15:37:40.549774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.549842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.564261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ea248 00:14:39.203 [2024-04-17 15:37:40.566435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.566468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.580611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eaab8 00:14:39.203 [2024-04-17 15:37:40.582692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.582727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.596693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eb328 00:14:39.203 [2024-04-17 15:37:40.598738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.612951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with 
pdu=0x2000190ebb98 00:14:39.203 [2024-04-17 15:37:40.614978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.615021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:14:39.203 [2024-04-17 15:37:40.629259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ec408 00:14:39.203 [2024-04-17 15:37:40.631280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.203 [2024-04-17 15:37:40.631317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.645655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ecc78 00:14:39.461 [2024-04-17 15:37:40.647652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.647689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.661891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ed4e8 00:14:39.461 [2024-04-17 15:37:40.663913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.663950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.678222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190edd58 00:14:39.461 [2024-04-17 15:37:40.680208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.680260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.694725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190ee5c8 00:14:39.461 [2024-04-17 15:37:40.696674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.696712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.711128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eee38 00:14:39.461 [2024-04-17 15:37:40.713052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.713088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.727604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13881d0) with pdu=0x2000190ef6a8 00:14:39.461 [2024-04-17 15:37:40.729621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.729656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.744079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190eff18 00:14:39.461 [2024-04-17 15:37:40.746011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.746058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.761007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f0788 00:14:39.461 [2024-04-17 15:37:40.762951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.762986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.777285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f0ff8 00:14:39.461 [2024-04-17 15:37:40.779131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.779168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.793577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f1868 00:14:39.461 [2024-04-17 15:37:40.795485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.795534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.810159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f20d8 00:14:39.461 [2024-04-17 15:37:40.812078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.812129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.826439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f2948 00:14:39.461 [2024-04-17 15:37:40.828253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.828288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.842933] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f31b8 00:14:39.461 [2024-04-17 15:37:40.844750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.461 [2024-04-17 15:37:40.844791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:14:39.461 [2024-04-17 15:37:40.859495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f3a28 00:14:39.462 [2024-04-17 15:37:40.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.462 [2024-04-17 15:37:40.861318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:39.462 [2024-04-17 15:37:40.875196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f4298 00:14:39.462 [2024-04-17 15:37:40.876874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.462 [2024-04-17 15:37:40.876922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:39.462 [2024-04-17 15:37:40.890894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f4b08 00:14:39.462 [2024-04-17 15:37:40.892638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.462 [2024-04-17 15:37:40.892701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.907043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f5378 00:14:39.720 [2024-04-17 15:37:40.908735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.908776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.923106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f5be8 00:14:39.720 [2024-04-17 15:37:40.924745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.924816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.939080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f6458 00:14:39.720 [2024-04-17 15:37:40.940716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.940759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.955181] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f6cc8 00:14:39.720 [2024-04-17 15:37:40.956784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.956818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.971233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f7538 00:14:39.720 [2024-04-17 15:37:40.972816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.972865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:40.987064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f7da8 00:14:39.720 [2024-04-17 15:37:40.988615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:40.988665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.003550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f8618 00:14:39.720 [2024-04-17 15:37:41.005108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.005141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.020008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f8e88 00:14:39.720 [2024-04-17 15:37:41.021596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.021646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.036837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f96f8 00:14:39.720 [2024-04-17 15:37:41.038373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.038422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.052944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190f9f68 00:14:39.720 [2024-04-17 15:37:41.054445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.054494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:14:39.720 
[2024-04-17 15:37:41.068989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fa7d8 00:14:39.720 [2024-04-17 15:37:41.070459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.070492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.085180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fb048 00:14:39.720 [2024-04-17 15:37:41.086627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.086661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.101234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fb8b8 00:14:39.720 [2024-04-17 15:37:41.102681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.102715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.720 [2024-04-17 15:37:41.117303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13881d0) with pdu=0x2000190fc128 00:14:39.720 [2024-04-17 15:37:41.118706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.720 [2024-04-17 15:37:41.118739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:14:39.720 00:14:39.720 Latency(us) 00:14:39.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.720 nvme0n1 : 2.01 15564.55 60.80 0.00 0.00 8216.42 2546.97 31218.97 00:14:39.720 =================================================================================================================== 00:14:39.720 Total : 15564.55 60.80 0.00 0.00 8216.42 2546.97 31218.97 00:14:39.720 0 00:14:39.720 15:37:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:14:39.720 15:37:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:14:39.720 15:37:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:14:39.720 | .driver_specific 00:14:39.720 | .nvme_error 00:14:39.720 | .status_code 00:14:39.720 | .command_transient_transport_error' 00:14:39.720 15:37:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:14:40.287 15:37:41 -- host/digest.sh@71 -- # (( 122 > 0 )) 00:14:40.287 15:37:41 -- host/digest.sh@73 -- # killprocess 76725 00:14:40.287 15:37:41 -- common/autotest_common.sh@936 -- # '[' -z 76725 ']' 00:14:40.287 15:37:41 -- common/autotest_common.sh@940 -- # kill -0 76725 00:14:40.287 15:37:41 -- common/autotest_common.sh@941 -- # uname 00:14:40.287 15:37:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.287 15:37:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 76725 00:14:40.287 15:37:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:40.287 15:37:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:40.287 killing process with pid 76725 00:14:40.287 15:37:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76725' 00:14:40.287 Received shutdown signal, test time was about 2.000000 seconds 00:14:40.287 00:14:40.287 Latency(us) 00:14:40.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.287 =================================================================================================================== 00:14:40.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.287 15:37:41 -- common/autotest_common.sh@955 -- # kill 76725 00:14:40.287 15:37:41 -- common/autotest_common.sh@960 -- # wait 76725 00:14:40.546 15:37:41 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:14:40.546 15:37:41 -- host/digest.sh@54 -- # local rw bs qd 00:14:40.546 15:37:41 -- host/digest.sh@56 -- # rw=randwrite 00:14:40.546 15:37:41 -- host/digest.sh@56 -- # bs=131072 00:14:40.546 15:37:41 -- host/digest.sh@56 -- # qd=16 00:14:40.546 15:37:41 -- host/digest.sh@58 -- # bperfpid=76784 00:14:40.546 15:37:41 -- host/digest.sh@60 -- # waitforlisten 76784 /var/tmp/bperf.sock 00:14:40.546 15:37:41 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:14:40.546 15:37:41 -- common/autotest_common.sh@817 -- # '[' -z 76784 ']' 00:14:40.546 15:37:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:40.546 15:37:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.546 15:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:40.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:40.546 15:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.546 15:37:41 -- common/autotest_common.sh@10 -- # set +x 00:14:40.546 [2024-04-17 15:37:41.894525] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:40.546 [2024-04-17 15:37:41.894643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76784 ] 00:14:40.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:40.546 Zero copy mechanism will not be used. 
00:14:40.804 [2024-04-17 15:37:42.033845] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.804 [2024-04-17 15:37:42.170864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.741 15:37:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:41.741 15:37:42 -- common/autotest_common.sh@850 -- # return 0 00:14:41.741 15:37:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:41.741 15:37:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:14:41.741 15:37:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:14:41.741 15:37:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.741 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:41.741 15:37:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.741 15:37:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:41.741 15:37:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:41.999 nvme0n1 00:14:41.999 15:37:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:14:41.999 15:37:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.999 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:41.999 15:37:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.999 15:37:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:14:41.999 15:37:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:42.259 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:42.259 Zero copy mechanism will not be used. 00:14:42.259 Running I/O for 2 seconds... 
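For reference, the randwrite / 131072-byte / qd16 pass that starts here boils down to the following RPC sequence. This is a minimal sketch reconstructed from the trace above, not the host/digest.sh script itself; the binary paths, the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 target and the nqn.2016-06.io.spdk:cnode1 NQN are taken verbatim from the log, and rpc_cmd is assumed to be the autotest_common.sh helper that issues the same rpc.py call against the target-side RPC socket.

# launch bdevperf with its own RPC socket (this run: 128 KiB random writes, queue depth 16, 2 s)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# collect per-status error counters instead of retrying failed I/O
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# clear any previous crc32c error injection on the target, attach the TCP controller
# with data digest enabled, then start corrupting crc32c on the target side
rpc_cmd accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# run the workload, then read back the transient-transport-error counter
# (the same bdev_get_iostat | jq pipeline the harness uses at host/digest.sh@71)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'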
00:14:42.259 [2024-04-17 15:37:43.461828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.462150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.462201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.467240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.467544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.467580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.472538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.472856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.472885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.477910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.478208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.478252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.483323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.483621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.488563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.488902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.493981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.494277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.494319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.499506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.499858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.499891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.505024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.505346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.505386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.510568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.510904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.510941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.515876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.516169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.259 [2024-04-17 15:37:43.521071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.259 [2024-04-17 15:37:43.521374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.259 [2024-04-17 15:37:43.521410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.526377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.526680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.531678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.531990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.532030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.536999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.537304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.537349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.542271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.542578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.542617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.547617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.547939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.547979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.553056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.553357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.553401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.558430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.558722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.558772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.563637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.563951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.563987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.568912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.569210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.569246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.574101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.574399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.574438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.579364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.579678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.579727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.584616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.584944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.584988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.589881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.590178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.590212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.595178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.595478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.595518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.600358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.600663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.600696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.605654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.605977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 
[2024-04-17 15:37:43.606016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.610866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.611171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.611203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.616157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.616502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.621417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.621714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.621762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.626670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.626989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.627032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.631912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.632206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.632242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.637110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.637403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.637437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.642303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.642598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.642636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.647573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.647900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.647934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.652951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.653262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.658173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.658470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.658506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.663409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.260 [2024-04-17 15:37:43.663709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.260 [2024-04-17 15:37:43.663743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.260 [2024-04-17 15:37:43.668720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.669048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.669087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.673946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.674242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.674278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.679119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.679415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.679452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.684319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.684630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.684664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.689568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.689880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.689905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.694726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.695049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.695124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.261 [2024-04-17 15:37:43.700057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.261 [2024-04-17 15:37:43.700370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.261 [2024-04-17 15:37:43.700405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.705270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.705574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.705608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.710478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.710773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.710820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.715806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.716116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.716141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.720993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.721307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.721341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.726238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.726539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.726577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.731536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.731847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.731880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.736778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.737074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.737117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.741976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.742270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.742304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.747174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.747470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.747506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.752304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 
[2024-04-17 15:37:43.752617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.752652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.757616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.757924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.757949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.762945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.763250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.763287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.768145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.768444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.768481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.773395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.773694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.773731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.778645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.778957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.778993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.783900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.784196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.784225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.789214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.789525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.789560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.794419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.794728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.794780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.799689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.800028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.800066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.804827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.805134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.805169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.810025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.810321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.810354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.815253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.815558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.815597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.820651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.820977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.821013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.826003] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.826324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.826367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.831435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.831765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.522 [2024-04-17 15:37:43.831808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.522 [2024-04-17 15:37:43.836860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.522 [2024-04-17 15:37:43.837172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.837214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.842101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.842396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.842439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.847402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.847728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.847772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.852505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.852900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.857668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.858050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:14:42.523 [2024-04-17 15:37:43.862738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.863059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.863104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.867976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.868311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.868346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.873225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.873549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.873583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.878378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.878704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.878760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.883613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.883956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.883991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.888934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.889232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.889276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.894223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.894537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.894571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.899583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.899931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.899968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.904859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.905196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.905231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.910106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.910441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.915528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.915850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.915883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.920746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.921091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.921125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.926046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.926373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.926407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.931362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.931718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.931786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.937055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.937410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.937457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.942324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.942665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.942699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.947641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.947996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.948043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.953038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.953359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.953394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.523 [2024-04-17 15:37:43.958302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.523 [2024-04-17 15:37:43.958638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.523 [2024-04-17 15:37:43.958685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.963527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.963876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.963918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.968738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.969081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.969125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.974072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.974384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.974418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.979310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.979653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.979687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.984551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.984891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.984924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.989860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.990169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.990202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:43.995152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:43.995450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:43.995484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:44.000498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.784 [2024-04-17 15:37:44.000845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.784 [2024-04-17 15:37:44.000879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.784 [2024-04-17 15:37:44.005866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.006218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 
[2024-04-17 15:37:44.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.011482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.011825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.011874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.016905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.017232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.017279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.022168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.022479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.022514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.027471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.027806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.027851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.032764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.033089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.033123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.038031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.038332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.038366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.043312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.043602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.043636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.048644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.048969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.049008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.053963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.054256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.054289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.059253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.059561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.059601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.064523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.064833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.064857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.069733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.070052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.070087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.074986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.075290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.075322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.080196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.080495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.080529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.085383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.085682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.085715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.090624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.090937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.090978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.095808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.096104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.096146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.101040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.101381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.106304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.106618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.106651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.111583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.111910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.111943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.116899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.117196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.117237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.122072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.122382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.122418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.127457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.127778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.127821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.132842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.133139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.133171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.138225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.138537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.138572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.144208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.144539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.785 [2024-04-17 15:37:44.144582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.785 [2024-04-17 15:37:44.149546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.785 [2024-04-17 15:37:44.149881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.149918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.154808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 
[2024-04-17 15:37:44.155148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.155183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.160114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.160411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.160444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.165321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.165618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.165654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.170614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.170922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.170955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.175835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.176132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.176165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.181089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.181384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.181421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.186285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.186580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.186614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.191567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.191880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.191909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.196893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.197191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.197220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.202286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.202593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.202627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.207651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.207962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.207995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.212994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.213291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.218475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.218793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.218835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:42.786 [2024-04-17 15:37:44.223928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:42.786 [2024-04-17 15:37:44.224281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.786 [2024-04-17 15:37:44.224315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.229376] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.229670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.229704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.234778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.235120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.235152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.240115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.240472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.240506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.245599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.245946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.245987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.251083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.251391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.251423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.256448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.256743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.045 [2024-04-17 15:37:44.256785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.045 [2024-04-17 15:37:44.261755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.045 [2024-04-17 15:37:44.262149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.262182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:14:43.046 [2024-04-17 15:37:44.267098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.267393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.267429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.272543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.272849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.272881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.277855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.278165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.278194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.283148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.283449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.283482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.288569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.288887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.288919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.294048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.294368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.294400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.299333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.299664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.299697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.304659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.305018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.305054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.310048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.310392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.310425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.315506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.315851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.315898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.321043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.321353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.321385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.326461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.326771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.326817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.331795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.332102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.332137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.337011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.337312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.337346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.342269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.342574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.342607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.347470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.347840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.352790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.353099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.353132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.357976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.358293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.363250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.363557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.363589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.368502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.368826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.368858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.373744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.374083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.374116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.379130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.046 [2024-04-17 15:37:44.379424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.046 [2024-04-17 15:37:44.379457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.046 [2024-04-17 15:37:44.384330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.384628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.384662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.389487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.389796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.389828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.394801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.395116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.395156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.400130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.400425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.400458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.405366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.405660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.405693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.411177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.411473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 
[2024-04-17 15:37:44.411507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.416478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.416794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.416819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.421755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.422079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.422111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.427131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.427430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.427462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.432331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.432659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.437772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.438114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.438145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.443083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.443379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.443411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.448375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.448676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.448709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.453613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.453943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.453974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.458841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.459175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.459207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.464190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.464509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.464532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.469630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.469969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.470001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.475093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.475407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.475441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.480339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.480637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.480671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.047 [2024-04-17 15:37:44.485746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.047 [2024-04-17 15:37:44.486063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.047 [2024-04-17 15:37:44.486096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.307 [2024-04-17 15:37:44.490972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.307 [2024-04-17 15:37:44.491275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.307 [2024-04-17 15:37:44.491309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.307 [2024-04-17 15:37:44.496343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.307 [2024-04-17 15:37:44.496682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.307 [2024-04-17 15:37:44.496715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.307 [2024-04-17 15:37:44.501636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.307 [2024-04-17 15:37:44.501984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.307 [2024-04-17 15:37:44.502017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.307 [2024-04-17 15:37:44.506978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.307 [2024-04-17 15:37:44.507286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.507318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.512351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.512726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.517647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.517971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.518004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.522960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.523265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.523297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.528290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.528637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.528670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.533573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.533926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.533959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.539034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.539330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.539363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.544333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.544657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.544690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.549656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.549991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.550024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.554890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.555224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.560090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 
[2024-04-17 15:37:44.560387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.560420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.565445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.565782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.570747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.571066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.571102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.576105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.576450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.576482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.581457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.581782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.581824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.586729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.587105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.587137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.592177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.592501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.592534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.597510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.597819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.597850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.602725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.603062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.603095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.608033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.608343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.608375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.613317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.613644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.613677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.618644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.618968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.619000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.623924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.624237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.624270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.629125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.629450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.629483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.634431] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.634750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.634789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.639624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.639965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.639997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.644842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.308 [2024-04-17 15:37:44.645183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.308 [2024-04-17 15:37:44.645216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.308 [2024-04-17 15:37:44.650284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.650605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.650639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.655595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.655919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.655951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.660825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.661181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.661214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.666278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.666618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.666652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:14:43.309 [2024-04-17 15:37:44.671781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.672151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.672185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.677111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.677461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.677495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.682329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.682667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.682708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.687746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.688103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.688136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.693154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.693499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.693534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.698241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.698583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.698617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.703509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.703849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.703898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.708688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.709048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.709081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.713814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.714186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.714219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.718989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.719334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.719367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.724283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.724667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.729662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.730011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.730035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.734970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.735298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.735331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.740377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.740751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.740790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.309 [2024-04-17 15:37:44.745763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.309 [2024-04-17 15:37:44.746124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.309 [2024-04-17 15:37:44.746157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.751442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.751742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.569 [2024-04-17 15:37:44.751786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.756847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.757167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.569 [2024-04-17 15:37:44.757200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.762138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.762461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.569 [2024-04-17 15:37:44.762495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.767327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.767699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.569 [2024-04-17 15:37:44.767732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.772719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.773092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.569 [2024-04-17 15:37:44.773125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.569 [2024-04-17 15:37:44.777972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.569 [2024-04-17 15:37:44.778327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.778360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.783225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.783561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.783594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.788667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.788977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.789009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.794092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.794415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.794450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.799541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.799882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.799914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.804860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.805156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.805189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.810347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.810649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.810684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.815942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.816286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 
[2024-04-17 15:37:44.816320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.821366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.821706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.821740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.826908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.827216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.827248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.832505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.832804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.832847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.837881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.838176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.838208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.843329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.843655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.843690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.848753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.849063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.849096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.854112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.854450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.854484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.859675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.859989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.860021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.865056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.865384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.865418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.870450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.870768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.870812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.875872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.876197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.876230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.881256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.881556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.881590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.886705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.887037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.887069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.892232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.892575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.892608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.897616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.897964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.897997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.902895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.903237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.903274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.908169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.908507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.908543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.913607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.913934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.913974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.919168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.919494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.919528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.924697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.925030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.925063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.930039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.930358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.930390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.935375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.935697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.940834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.941130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.941165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.946277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.946612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.946646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.951726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.952068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.952103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.957178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.957516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.957549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.962604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.962953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.962986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.967945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 
[2024-04-17 15:37:44.968280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.968314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.973381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.973700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.973733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.978661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.978997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.979046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.984147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.984491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.984515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.989660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.990005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.990041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:44.994943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:44.995298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:44.995331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:45.000320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:45.000665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:45.000698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.570 [2024-04-17 15:37:45.005879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.570 [2024-04-17 15:37:45.006189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.570 [2024-04-17 15:37:45.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.011217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.011543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.011586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.016600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.016923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.016956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.022100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.022407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.022445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.027616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.027971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.028005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.033030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.033327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.033359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.038461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.038786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.038835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.043998] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.044343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.044376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.049401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.049714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.049747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.054830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.055155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.055188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.060135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.060456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.060493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.065390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.065754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.065798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.070882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.071227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.071260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.076220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.076567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.076600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:14:43.830 [2024-04-17 15:37:45.081513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.081883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.081914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.087001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.087324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.087364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.830 [2024-04-17 15:37:45.092500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.830 [2024-04-17 15:37:45.092870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.830 [2024-04-17 15:37:45.092901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.097912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.098252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.098285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.103197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.103523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.103565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.108603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.108966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.108999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.114056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.114404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.114441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.119443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.119770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.119809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.125060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.125423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.125456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.130539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.130880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.130912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.135947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.136240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.136274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.141288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.141612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.141645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.146571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.146880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.146912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.151885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.152241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.157288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.157620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.157653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.162486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.162829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.162861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.167929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.168244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.168276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.173298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.173628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.173661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.178678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.178988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.184011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.184373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.184406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.189304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.189649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.189683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.194603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.194936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.194965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.200078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.200402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.200434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.205560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.205884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.205907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.210939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.211247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.211279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.216405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.216737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.216780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.221747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.222083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.222116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.227175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.227486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 
[2024-04-17 15:37:45.227521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.232590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.232916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.232948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.238053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.238379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.238409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.243598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.243937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.243971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.249044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.249390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.249423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.254550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.254873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.254905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.260175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.260512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.260554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.265709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.266060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.831 [2024-04-17 15:37:45.266094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:43.831 [2024-04-17 15:37:45.271226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:43.831 [2024-04-17 15:37:45.271518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.271547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.276521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.276912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.276943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.281848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.282219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.287450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.287790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.287833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.292880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.293206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.293239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.298285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.298602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.298635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.303626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.303940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.303973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.308949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.309250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.309283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.314266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.314560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.314594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.319721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.320044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.325199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.325497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.325531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.330403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.330706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.330739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.335633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.335945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.335973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.341020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.341333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.341366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.346444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.346741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.346781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.351723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.352034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.352068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.357109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.357406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.357439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.362233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.367619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.367975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.368008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.372830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.373138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.373171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.378055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 
[2024-04-17 15:37:45.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.378430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.383270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.383595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.383628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.388494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.388849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.388882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.393746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.394080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.394113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.399292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.399656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.399689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.404768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.405087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.405119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.092 [2024-04-17 15:37:45.410337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.092 [2024-04-17 15:37:45.410699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.092 [2024-04-17 15:37:45.410734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.415906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.416206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.421290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.421656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.421690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.426579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.426927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.426959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.432285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.432583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.432617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.437780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.438108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.438141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.443307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.443606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.443640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.448806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.449100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.449133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.093 [2024-04-17 15:37:45.454253] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1386d00) with pdu=0x2000190fef90 00:14:44.093 [2024-04-17 15:37:45.454549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.093 [2024-04-17 15:37:45.454583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:44.093 00:14:44.093 Latency(us) 00:14:44.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.093 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:14:44.093 nvme0n1 : 2.00 5796.70 724.59 0.00 0.00 2754.42 2323.55 5987.61 00:14:44.093 =================================================================================================================== 00:14:44.093 Total : 5796.70 724.59 0.00 0.00 2754.42 2323.55 5987.61 00:14:44.093 0 00:14:44.093 15:37:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:14:44.093 15:37:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:14:44.093 15:37:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:14:44.093 15:37:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:14:44.093 | .driver_specific 00:14:44.093 | .nvme_error 00:14:44.093 | .status_code 00:14:44.093 | .command_transient_transport_error' 00:14:44.357 15:37:45 -- host/digest.sh@71 -- # (( 374 > 0 )) 00:14:44.357 15:37:45 -- host/digest.sh@73 -- # killprocess 76784 00:14:44.357 15:37:45 -- common/autotest_common.sh@936 -- # '[' -z 76784 ']' 00:14:44.357 15:37:45 -- common/autotest_common.sh@940 -- # kill -0 76784 00:14:44.357 15:37:45 -- common/autotest_common.sh@941 -- # uname 00:14:44.357 15:37:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:44.357 15:37:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76784 00:14:44.357 15:37:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:44.357 killing process with pid 76784 00:14:44.357 Received shutdown signal, test time was about 2.000000 seconds 00:14:44.357 00:14:44.357 Latency(us) 00:14:44.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.357 =================================================================================================================== 00:14:44.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.357 15:37:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:44.357 15:37:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76784' 00:14:44.357 15:37:45 -- common/autotest_common.sh@955 -- # kill 76784 00:14:44.357 15:37:45 -- common/autotest_common.sh@960 -- # wait 76784 00:14:44.923 15:37:46 -- host/digest.sh@116 -- # killprocess 76567 00:14:44.923 15:37:46 -- common/autotest_common.sh@936 -- # '[' -z 76567 ']' 00:14:44.923 15:37:46 -- common/autotest_common.sh@940 -- # kill -0 76567 00:14:44.923 15:37:46 -- common/autotest_common.sh@941 -- # uname 00:14:44.923 15:37:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:44.923 15:37:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76567 00:14:44.923 15:37:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:44.923 killing process with pid 76567 00:14:44.923 15:37:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:44.923 15:37:46 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 76567' 00:14:44.923 15:37:46 -- common/autotest_common.sh@955 -- # kill 76567 00:14:44.923 15:37:46 -- common/autotest_common.sh@960 -- # wait 76567 00:14:45.181 00:14:45.181 real 0m19.117s 00:14:45.181 user 0m36.585s 00:14:45.181 sys 0m4.995s 00:14:45.181 15:37:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.181 ************************************ 00:14:45.181 END TEST nvmf_digest_error 00:14:45.181 ************************************ 00:14:45.181 15:37:46 -- common/autotest_common.sh@10 -- # set +x 00:14:45.181 15:37:46 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:45.181 15:37:46 -- host/digest.sh@150 -- # nvmftestfini 00:14:45.181 15:37:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:45.181 15:37:46 -- nvmf/common.sh@117 -- # sync 00:14:45.181 15:37:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.439 15:37:46 -- nvmf/common.sh@120 -- # set +e 00:14:45.439 15:37:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.439 15:37:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.439 rmmod nvme_tcp 00:14:45.439 rmmod nvme_fabrics 00:14:45.439 rmmod nvme_keyring 00:14:45.439 15:37:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.439 15:37:46 -- nvmf/common.sh@124 -- # set -e 00:14:45.439 15:37:46 -- nvmf/common.sh@125 -- # return 0 00:14:45.439 15:37:46 -- nvmf/common.sh@478 -- # '[' -n 76567 ']' 00:14:45.439 15:37:46 -- nvmf/common.sh@479 -- # killprocess 76567 00:14:45.439 15:37:46 -- common/autotest_common.sh@936 -- # '[' -z 76567 ']' 00:14:45.439 Process with pid 76567 is not found 00:14:45.439 15:37:46 -- common/autotest_common.sh@940 -- # kill -0 76567 00:14:45.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76567) - No such process 00:14:45.439 15:37:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76567 is not found' 00:14:45.439 15:37:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:45.439 15:37:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:45.439 15:37:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:45.439 15:37:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.439 15:37:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.439 15:37:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.439 15:37:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.439 15:37:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.439 15:37:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.439 00:14:45.439 real 0m39.890s 00:14:45.439 user 1m15.483s 00:14:45.439 sys 0m10.319s 00:14:45.439 15:37:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.439 ************************************ 00:14:45.439 END TEST nvmf_digest 00:14:45.439 ************************************ 00:14:45.439 15:37:46 -- common/autotest_common.sh@10 -- # set +x 00:14:45.439 15:37:46 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:14:45.439 15:37:46 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:14:45.439 15:37:46 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:14:45.439 15:37:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.439 15:37:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.439 15:37:46 -- common/autotest_common.sh@10 -- # set +x 00:14:45.439 ************************************ 
00:14:45.440 START TEST nvmf_multipath 00:14:45.440 ************************************ 00:14:45.440 15:37:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:14:45.698 * Looking for test storage... 00:14:45.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:45.698 15:37:46 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.698 15:37:46 -- nvmf/common.sh@7 -- # uname -s 00:14:45.698 15:37:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.698 15:37:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.698 15:37:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.698 15:37:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.698 15:37:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.698 15:37:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.698 15:37:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.698 15:37:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.698 15:37:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.698 15:37:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:14:45.698 15:37:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:14:45.698 15:37:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.698 15:37:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.698 15:37:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.698 15:37:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.698 15:37:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.698 15:37:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.698 15:37:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.698 15:37:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.698 15:37:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.698 15:37:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.698 15:37:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.698 15:37:46 -- paths/export.sh@5 -- # export PATH 00:14:45.698 15:37:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.698 15:37:46 -- nvmf/common.sh@47 -- # : 0 00:14:45.698 15:37:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.698 15:37:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.698 15:37:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.698 15:37:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.698 15:37:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.698 15:37:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.698 15:37:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.698 15:37:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.698 15:37:46 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.698 15:37:46 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.698 15:37:46 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.698 15:37:46 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:45.698 15:37:46 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.698 15:37:46 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:45.698 15:37:46 -- host/multipath.sh@30 -- # nvmftestinit 00:14:45.698 15:37:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:45.698 15:37:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.698 15:37:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:45.698 15:37:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:45.698 15:37:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:45.698 15:37:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.698 15:37:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.698 15:37:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.698 15:37:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:45.698 15:37:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:45.698 15:37:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.698 15:37:46 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.698 15:37:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.698 15:37:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.698 15:37:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.698 15:37:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.698 15:37:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.698 15:37:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.698 15:37:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.698 15:37:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.698 15:37:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.698 15:37:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.698 15:37:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.698 15:37:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.698 Cannot find device "nvmf_tgt_br" 00:14:45.698 15:37:47 -- nvmf/common.sh@155 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.698 Cannot find device "nvmf_tgt_br2" 00:14:45.698 15:37:47 -- nvmf/common.sh@156 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.698 15:37:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.698 Cannot find device "nvmf_tgt_br" 00:14:45.698 15:37:47 -- nvmf/common.sh@158 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.698 Cannot find device "nvmf_tgt_br2" 00:14:45.698 15:37:47 -- nvmf/common.sh@159 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.698 15:37:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.698 15:37:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.698 15:37:47 -- nvmf/common.sh@162 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.698 15:37:47 -- nvmf/common.sh@163 -- # true 00:14:45.698 15:37:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.698 15:37:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.698 15:37:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.957 15:37:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.957 15:37:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.957 15:37:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.957 15:37:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.957 15:37:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.957 15:37:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.957 15:37:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:45.957 15:37:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:45.957 15:37:47 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:14:45.957 15:37:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:45.957 15:37:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.957 15:37:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.957 15:37:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.957 15:37:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:45.957 15:37:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:45.957 15:37:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.957 15:37:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.957 15:37:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.957 15:37:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.957 15:37:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.957 15:37:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:45.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:45.957 00:14:45.957 --- 10.0.0.2 ping statistics --- 00:14:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.957 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:45.957 15:37:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:45.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:45.957 00:14:45.957 --- 10.0.0.3 ping statistics --- 00:14:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.957 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:45.957 15:37:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:45.957 00:14:45.957 --- 10.0.0.1 ping statistics --- 00:14:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.957 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:45.957 15:37:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.957 15:37:47 -- nvmf/common.sh@422 -- # return 0 00:14:45.957 15:37:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:45.957 15:37:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.957 15:37:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:45.957 15:37:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:45.957 15:37:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.957 15:37:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:45.957 15:37:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:45.957 15:37:47 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:14:45.957 15:37:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:45.957 15:37:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.957 15:37:47 -- common/autotest_common.sh@10 -- # set +x 00:14:45.957 15:37:47 -- nvmf/common.sh@470 -- # nvmfpid=77058 00:14:45.957 15:37:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:45.957 15:37:47 -- nvmf/common.sh@471 -- # waitforlisten 77058 00:14:45.958 15:37:47 -- common/autotest_common.sh@817 -- # '[' -z 77058 ']' 00:14:45.958 15:37:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.958 15:37:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.958 15:37:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.958 15:37:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.958 15:37:47 -- common/autotest_common.sh@10 -- # set +x 00:14:46.216 [2024-04-17 15:37:47.424151] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:14:46.216 [2024-04-17 15:37:47.424272] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.216 [2024-04-17 15:37:47.565859] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:46.474 [2024-04-17 15:37:47.709855] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.474 [2024-04-17 15:37:47.709911] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.474 [2024-04-17 15:37:47.709923] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.474 [2024-04-17 15:37:47.709932] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.474 [2024-04-17 15:37:47.709940] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
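[Editor's note] The target startup traced above follows the harness's netns/veth pattern (nvmf_veth_init followed by nvmfappstart). A rough sketch of the equivalent manual steps, using only the interface names, addresses, and binary path already shown in this log:

    # create the target namespace and a veth pair, move the target end inside
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # start nvmf_tgt inside the namespace; the harness then polls the
    # /var/tmp/spdk.sock RPC socket (waitforlisten) before issuing any RPCs
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

This is a condensed sketch of what the sourced nvmf/common.sh functions do, not a verbatim excerpt of the harness.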
00:14:46.474 [2024-04-17 15:37:47.710106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.474 [2024-04-17 15:37:47.710115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.041 15:37:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.041 15:37:48 -- common/autotest_common.sh@850 -- # return 0 00:14:47.041 15:37:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:47.041 15:37:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:47.041 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:14:47.041 15:37:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.041 15:37:48 -- host/multipath.sh@33 -- # nvmfapp_pid=77058 00:14:47.041 15:37:48 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:47.299 [2024-04-17 15:37:48.687472] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.299 15:37:48 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:47.557 Malloc0 00:14:47.817 15:37:49 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:47.817 15:37:49 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.084 15:37:49 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.346 [2024-04-17 15:37:49.662259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.346 15:37:49 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:48.605 [2024-04-17 15:37:49.882359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:48.605 15:37:49 -- host/multipath.sh@44 -- # bdevperf_pid=77108 00:14:48.605 15:37:49 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:48.605 15:37:49 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.605 15:37:49 -- host/multipath.sh@47 -- # waitforlisten 77108 /var/tmp/bdevperf.sock 00:14:48.605 15:37:49 -- common/autotest_common.sh@817 -- # '[' -z 77108 ']' 00:14:48.605 15:37:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.605 15:37:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.605 15:37:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
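[Editor's note] The RPC sequence traced above builds one malloc-backed subsystem with two TCP listeners (4420 and 4421) so the initiator can reach the same namespace over both ports for the multipath test. A condensed sketch of those calls, with the NQN, ports, and paths taken from the log (ordering as executed by multipath.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf (started with -r /var/tmp/bdevperf.sock, as above) then attaches
    # Nvme0 to both listeners, the second attach carrying -x multipath

This is a sketch of the commands already visible in the trace, not an addition to the test flow.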
00:14:48.605 15:37:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.605 15:37:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.540 15:37:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.540 15:37:50 -- common/autotest_common.sh@850 -- # return 0 00:14:49.540 15:37:50 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:49.798 15:37:51 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:14:50.056 Nvme0n1 00:14:50.056 15:37:51 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:50.624 Nvme0n1 00:14:50.624 15:37:51 -- host/multipath.sh@78 -- # sleep 1 00:14:50.624 15:37:51 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:51.561 15:37:52 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:14:51.561 15:37:52 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:51.820 15:37:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:52.078 15:37:53 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:14:52.078 15:37:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:14:52.078 15:37:53 -- host/multipath.sh@65 -- # dtrace_pid=77159 00:14:52.078 15:37:53 -- host/multipath.sh@66 -- # sleep 6 00:14:58.647 15:37:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:14:58.647 15:37:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:14:58.647 15:37:59 -- host/multipath.sh@67 -- # active_port=4421 00:14:58.647 15:37:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:14:58.647 Attaching 4 probes... 
00:14:58.647 @path[10.0.0.2, 4421]: 16320 00:14:58.647 @path[10.0.0.2, 4421]: 17312 00:14:58.647 @path[10.0.0.2, 4421]: 16693 00:14:58.647 @path[10.0.0.2, 4421]: 16605 00:14:58.647 @path[10.0.0.2, 4421]: 16937 00:14:58.647 15:37:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:14:58.647 15:37:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:14:58.647 15:37:59 -- host/multipath.sh@69 -- # sed -n 1p 00:14:58.647 15:37:59 -- host/multipath.sh@69 -- # port=4421 00:14:58.647 15:37:59 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:14:58.647 15:37:59 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:14:58.647 15:37:59 -- host/multipath.sh@72 -- # kill 77159 00:14:58.647 15:37:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:14:58.647 15:37:59 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:14:58.647 15:37:59 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:58.647 15:37:59 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:14:58.906 15:38:00 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:14:58.906 15:38:00 -- host/multipath.sh@65 -- # dtrace_pid=77271 00:14:58.906 15:38:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:14:58.906 15:38:00 -- host/multipath.sh@66 -- # sleep 6 00:15:05.471 15:38:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:15:05.471 15:38:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:05.471 15:38:06 -- host/multipath.sh@67 -- # active_port=4420 00:15:05.471 15:38:06 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:05.471 Attaching 4 probes... 
00:15:05.471 @path[10.0.0.2, 4420]: 16861 00:15:05.471 @path[10.0.0.2, 4420]: 17039 00:15:05.471 @path[10.0.0.2, 4420]: 16696 00:15:05.471 @path[10.0.0.2, 4420]: 16601 00:15:05.471 @path[10.0.0.2, 4420]: 16763 00:15:05.471 15:38:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:05.471 15:38:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:05.471 15:38:06 -- host/multipath.sh@69 -- # sed -n 1p 00:15:05.471 15:38:06 -- host/multipath.sh@69 -- # port=4420 00:15:05.471 15:38:06 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:15:05.471 15:38:06 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:15:05.471 15:38:06 -- host/multipath.sh@72 -- # kill 77271 00:15:05.471 15:38:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:05.471 15:38:06 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:15:05.471 15:38:06 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:05.471 15:38:06 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:05.730 15:38:07 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:15:05.730 15:38:07 -- host/multipath.sh@65 -- # dtrace_pid=77389 00:15:05.730 15:38:07 -- host/multipath.sh@66 -- # sleep 6 00:15:05.730 15:38:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:12.292 15:38:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:12.292 15:38:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:12.292 15:38:13 -- host/multipath.sh@67 -- # active_port=4421 00:15:12.292 15:38:13 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:12.292 Attaching 4 probes... 
00:15:12.292 @path[10.0.0.2, 4421]: 13123 00:15:12.292 @path[10.0.0.2, 4421]: 16844 00:15:12.292 @path[10.0.0.2, 4421]: 16852 00:15:12.292 @path[10.0.0.2, 4421]: 16839 00:15:12.292 @path[10.0.0.2, 4421]: 16821 00:15:12.292 15:38:13 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:12.292 15:38:13 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:12.292 15:38:13 -- host/multipath.sh@69 -- # sed -n 1p 00:15:12.292 15:38:13 -- host/multipath.sh@69 -- # port=4421 00:15:12.292 15:38:13 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:12.292 15:38:13 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:12.292 15:38:13 -- host/multipath.sh@72 -- # kill 77389 00:15:12.292 15:38:13 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:12.292 15:38:13 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:15:12.292 15:38:13 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:12.292 15:38:13 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:12.551 15:38:13 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:15:12.551 15:38:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:12.551 15:38:13 -- host/multipath.sh@65 -- # dtrace_pid=77507 00:15:12.551 15:38:13 -- host/multipath.sh@66 -- # sleep 6 00:15:19.114 15:38:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:15:19.114 15:38:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:19.114 15:38:20 -- host/multipath.sh@67 -- # active_port= 00:15:19.114 15:38:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:19.114 Attaching 4 probes... 
00:15:19.114 00:15:19.114 00:15:19.114 00:15:19.114 00:15:19.114 00:15:19.114 15:38:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:19.114 15:38:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:19.114 15:38:20 -- host/multipath.sh@69 -- # sed -n 1p 00:15:19.114 15:38:20 -- host/multipath.sh@69 -- # port= 00:15:19.114 15:38:20 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:15:19.114 15:38:20 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:15:19.114 15:38:20 -- host/multipath.sh@72 -- # kill 77507 00:15:19.114 15:38:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:19.114 15:38:20 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:15:19.114 15:38:20 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:19.114 15:38:20 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:19.372 15:38:20 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:15:19.372 15:38:20 -- host/multipath.sh@65 -- # dtrace_pid=77614 00:15:19.372 15:38:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:19.372 15:38:20 -- host/multipath.sh@66 -- # sleep 6 00:15:25.959 15:38:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:25.959 15:38:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:25.959 15:38:26 -- host/multipath.sh@67 -- # active_port=4421 00:15:25.959 15:38:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:25.959 Attaching 4 probes... 
00:15:25.959 @path[10.0.0.2, 4421]: 16487 00:15:25.959 @path[10.0.0.2, 4421]: 16325 00:15:25.959 @path[10.0.0.2, 4421]: 16368 00:15:25.959 @path[10.0.0.2, 4421]: 16600 00:15:25.959 @path[10.0.0.2, 4421]: 16654 00:15:25.959 15:38:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:25.959 15:38:26 -- host/multipath.sh@69 -- # sed -n 1p 00:15:25.959 15:38:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:25.959 15:38:26 -- host/multipath.sh@69 -- # port=4421 00:15:25.959 15:38:26 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:25.959 15:38:26 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:25.959 15:38:26 -- host/multipath.sh@72 -- # kill 77614 00:15:25.959 15:38:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:25.959 15:38:26 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:25.959 [2024-04-17 15:38:27.198736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.198993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 [2024-04-17 15:38:27.199127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363c60 is same with the state(5) to be set 00:15:25.959 15:38:27 -- host/multipath.sh@101 -- # sleep 1 00:15:26.894 15:38:28 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:15:26.894 15:38:28 -- host/multipath.sh@65 -- # dtrace_pid=77743 00:15:26.894 15:38:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:26.894 15:38:28 -- host/multipath.sh@66 -- # sleep 6 00:15:33.454 15:38:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:15:33.454 15:38:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners 
nqn.2016-06.io.spdk:cnode1 00:15:33.454 15:38:34 -- host/multipath.sh@67 -- # active_port=4420 00:15:33.454 15:38:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:33.454 Attaching 4 probes... 00:15:33.454 @path[10.0.0.2, 4420]: 16163 00:15:33.454 @path[10.0.0.2, 4420]: 16514 00:15:33.454 @path[10.0.0.2, 4420]: 16533 00:15:33.454 @path[10.0.0.2, 4420]: 16506 00:15:33.454 @path[10.0.0.2, 4420]: 16451 00:15:33.454 15:38:34 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:33.454 15:38:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:33.454 15:38:34 -- host/multipath.sh@69 -- # sed -n 1p 00:15:33.454 15:38:34 -- host/multipath.sh@69 -- # port=4420 00:15:33.454 15:38:34 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:15:33.454 15:38:34 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:15:33.455 15:38:34 -- host/multipath.sh@72 -- # kill 77743 00:15:33.455 15:38:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:33.455 15:38:34 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:33.455 [2024-04-17 15:38:34.764295] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:33.455 15:38:34 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:33.713 15:38:35 -- host/multipath.sh@111 -- # sleep 6 00:15:40.275 15:38:41 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:15:40.275 15:38:41 -- host/multipath.sh@65 -- # dtrace_pid=77916 00:15:40.275 15:38:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77058 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:40.275 15:38:41 -- host/multipath.sh@66 -- # sleep 6 00:15:46.854 15:38:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:46.854 15:38:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:46.854 15:38:47 -- host/multipath.sh@67 -- # active_port=4421 00:15:46.854 15:38:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:46.854 Attaching 4 probes... 
00:15:46.854 @path[10.0.0.2, 4421]: 16165 00:15:46.854 @path[10.0.0.2, 4421]: 16347 00:15:46.854 @path[10.0.0.2, 4421]: 16403 00:15:46.854 @path[10.0.0.2, 4421]: 16529 00:15:46.854 @path[10.0.0.2, 4421]: 16283 00:15:46.854 15:38:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:46.854 15:38:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:46.854 15:38:47 -- host/multipath.sh@69 -- # sed -n 1p 00:15:46.854 15:38:47 -- host/multipath.sh@69 -- # port=4421 00:15:46.854 15:38:47 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:46.854 15:38:47 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:46.854 15:38:47 -- host/multipath.sh@72 -- # kill 77916 00:15:46.854 15:38:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:46.854 15:38:47 -- host/multipath.sh@114 -- # killprocess 77108 00:15:46.854 15:38:47 -- common/autotest_common.sh@936 -- # '[' -z 77108 ']' 00:15:46.854 15:38:47 -- common/autotest_common.sh@940 -- # kill -0 77108 00:15:46.854 15:38:47 -- common/autotest_common.sh@941 -- # uname 00:15:46.854 15:38:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.854 15:38:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77108 00:15:46.854 15:38:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:46.854 killing process with pid 77108 00:15:46.854 15:38:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:46.854 15:38:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77108' 00:15:46.854 15:38:47 -- common/autotest_common.sh@955 -- # kill 77108 00:15:46.854 15:38:47 -- common/autotest_common.sh@960 -- # wait 77108 00:15:46.854 Connection closed with partial response: 00:15:46.854 00:15:46.854 00:15:46.854 15:38:47 -- host/multipath.sh@116 -- # wait 77108 00:15:46.854 15:38:47 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.854 [2024-04-17 15:37:49.956865] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:15:46.854 [2024-04-17 15:37:49.957004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77108 ] 00:15:46.854 [2024-04-17 15:37:50.099699] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.854 [2024-04-17 15:37:50.252837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.854 Running I/O for 90 seconds... 
00:15:46.854 [2024-04-17 15:38:00.259937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-04-17 15:38:00.260662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-04-17 15:38:00.260697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-04-17 15:38:00.260735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-04-17 15:38:00.260788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-04-17 15:38:00.260823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-04-17 15:38:00.260858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.854 [2024-04-17 15:38:00.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.260903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.260939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.260970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.260985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:46.855 [2024-04-17 15:38:00.261198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.261978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.261999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.262013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-04-17 15:38:00.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.855 [2024-04-17 15:38:00.262488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:15:46.855 [2024-04-17 15:38:00.262525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-04-17 15:38:00.262545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.262977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.262999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 
[2024-04-17 15:38:00.263683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-04-17 15:38:00.263947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.263968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.263990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.264012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.264027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.856 [2024-04-17 15:38:00.264048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14920 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-04-17 15:38:00.264062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.264083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-04-17 15:38:00.264097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.264118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-04-17 15:38:00.264133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.264154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-04-17 15:38:00.264168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.264189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-04-17 15:38:00.264203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-04-17 15:38:00.265649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.265934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.265948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:15:46.857 [2024-04-17 15:38:00.266371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:00.266571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:00.266585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.857 [2024-04-17 15:38:06.839697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-04-17 15:38:06.839711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.839974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:46.858 [2024-04-17 15:38:06.840158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-04-17 15:38:06.840736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-04-17 15:38:06.840853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.858 [2024-04-17 15:38:06.840874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.840888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.840909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.840923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.840944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.840957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.840978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.840992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:15:46.859 [2024-04-17 15:38:06.841278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.841363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.841969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.841993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-04-17 15:38:06.842302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-04-17 15:38:06.842337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.859 [2024-04-17 15:38:06.842358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.860 [2024-04-17 15:38:06.842414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.842984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.842998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.843243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.843501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.843520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-04-17 15:38:06.844238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:15:46.860 [2024-04-17 15:38:06.844272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-04-17 15:38:06.844621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.860 [2024-04-17 15:38:06.844651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:06.844982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:06.844998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.847834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.847920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.847985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:46.861 [2024-04-17 15:38:13.848480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:67 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.861 [2024-04-17 15:38:13.848896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.848960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.848974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.849001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.849015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.849036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.849051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.849081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.849097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.849118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.849133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.861 [2024-04-17 15:38:13.849154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.861 [2024-04-17 15:38:13.849169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:15:46.862 [2024-04-17 15:38:13.849637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.849810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.849846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.849882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.849917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.849955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.849976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.849998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.850035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.850071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.862 [2024-04-17 15:38:13.850107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.862 [2024-04-17 15:38:13.850536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.862 [2024-04-17 15:38:13.850550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.850586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.850622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.850658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.850695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.863 [2024-04-17 15:38:13.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.850971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.850993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.851008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.863 [2024-04-17 15:38:13.851628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.851664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.851699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.863 [2024-04-17 15:38:13.851720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.863 [2024-04-17 15:38:13.851734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:15:46.864 [2024-04-17 15:38:13.851887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.851973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.851993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.852329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.852344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.853704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.853748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.853800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.853836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.853871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.853907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.853942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.853978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.853999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:46.864 [2024-04-17 15:38:13.854338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.864 [2024-04-17 15:38:13.854852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.854894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.864 [2024-04-17 15:38:13.854915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.864 [2024-04-17 15:38:13.854930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.854962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.854978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.855475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.855714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.855734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:15:46.865 [2024-04-17 15:38:13.856128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.865 [2024-04-17 15:38:13.856683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.865 [2024-04-17 15:38:13.856705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.865 [2024-04-17 15:38:13.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.856972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.856986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.857509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.857529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.867439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.867564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.866 [2024-04-17 15:38:13.867866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.867902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.867936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.867971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.867992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.868014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.868037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.868051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.868071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.868085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.868106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.868120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.866 [2024-04-17 15:38:13.868147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.866 [2024-04-17 15:38:13.868164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:15:46.867 [2024-04-17 15:38:13.868324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.867 [2024-04-17 15:38:13.868443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.868966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.868986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.867 [2024-04-17 15:38:13.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.867 [2024-04-17 15:38:13.869513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.869551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.869588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:46.868 [2024-04-17 15:38:13.869692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.869980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.869998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.870820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.870868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.870916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.870945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.870963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:15:46.868 [2024-04-17 15:38:13.871617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.868 [2024-04-17 15:38:13.871637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.871684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.871732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.871797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.871845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.868 [2024-04-17 15:38:13.871892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.868 [2024-04-17 15:38:13.871932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.871951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.874953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.874981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.869 [2024-04-17 15:38:13.875600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.869 [2024-04-17 15:38:13.875767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.875961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.875990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.869 [2024-04-17 15:38:13.876371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.869 [2024-04-17 15:38:13.876399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.876418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.876465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.876512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.876560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.876960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.876979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:15:46.870 [2024-04-17 15:38:13.877103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.870 [2024-04-17 15:38:13.877789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.877838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.877887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.877941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.877970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.877988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.870 [2024-04-17 15:38:13.878332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.870 [2024-04-17 15:38:13.878361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.878966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.878999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.879018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.871 [2024-04-17 15:38:13.879834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.879882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.879929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.879958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.879977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.871 [2024-04-17 15:38:13.880358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.871 [2024-04-17 15:38:13.880377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.880425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.880472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.880520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.880567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.880622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.880671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.880719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.880785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.880835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.880864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.880883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.883977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.883991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.872 [2024-04-17 15:38:13.884027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.884071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.884105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.884140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.872 [2024-04-17 15:38:13.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.872 [2024-04-17 15:38:13.884196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884558] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.884901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 
m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.884966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.884980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.873 [2024-04-17 15:38:13.885190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.873 [2024-04-17 15:38:13.885818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.873 [2024-04-17 15:38:13.885843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.885857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.885882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.885896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.885920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.885934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.885959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.885973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.885997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.886011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.874 [2024-04-17 15:38:13.886251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.886962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.886986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.874 [2024-04-17 15:38:13.887000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.874 [2024-04-17 15:38:13.887416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.874 [2024-04-17 15:38:13.887441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.875 
[2024-04-17 15:38:13.887479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.887663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.887972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.887986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:13.888311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.888350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.888388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.888426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.888464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:13.888489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.875 [2024-04-17 15:38:13.888503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 
15:38:27.199742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.199975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.199988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.875 [2024-04-17 15:38:27.200003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.875 [2024-04-17 15:38:27.200016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.876 [2024-04-17 15:38:27.200879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 
[2024-04-17 15:38:27.200952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.200980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.200994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.201009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.876 [2024-04-17 15:38:27.201022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.876 [2024-04-17 15:38:27.201037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:106 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100336 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.201960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.201976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.201989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.877 [2024-04-17 15:38:27.202159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.877 [2024-04-17 15:38:27.202188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.877 [2024-04-17 15:38:27.202215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.877 [2024-04-17 15:38:27.202230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 15:38:47 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.878 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.878 [2024-04-17 15:38:27.202723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.202975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.202990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99968 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:46.878 [2024-04-17 15:38:27.203373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.878 [2024-04-17 15:38:27.203402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.878 [2024-04-17 15:38:27.203417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.879 [2024-04-17 15:38:27.203430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e5710 is same with the state(5) to be set 00:15:46.879 [2024-04-17 15:38:27.203473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.879 [2024-04-17 15:38:27.203484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.879 [2024-04-17 15:38:27.203495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:15:46.879 [2024-04-17 15:38:27.203509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203594] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e5710 was disconnected and freed. reset controller. 
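A note on the status pairs printed throughout the abort storm above: spdk_nvme_print_completion shows the NVMe status as (SCT/SC). In this run, (03/02) accompanies ASYMMETRIC ACCESS INACCESSIBLE (path-related status while the active path is torn down) and (00/08) accompanies ABORTED - SQ DELETION (generic status for commands aborted when their submission queue is deleted). A minimal Python sketch that maps only the two pairs actually seen in this log, using the same labels the log attaches to them:

```python
# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints above.
# Only the two combinations observed in this run are mapped; the labels are
# the ones the log itself attaches to them.
STATUS_BY_SCT_SC = {
    (0x00, 0x08): "ABORTED - SQ DELETION",           # generic status: command aborted, SQ deleted
    (0x03, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # path-related status: ANA inaccessible
}

def decode_status(pair: str) -> str:
    """Turn a string such as '03/02' into the label used in the log."""
    sct, sc = (int(field, 16) for field in pair.split("/"))
    return STATUS_BY_SCT_SC.get((sct, sc), f"unknown (sct=0x{sct:02x}, sc=0x{sc:02x})")

if __name__ == "__main__":
    print(decode_status("03/02"))  # ASYMMETRIC ACCESS INACCESSIBLE
    print(decode_status("00/08"))  # ABORTED - SQ DELETION
```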
00:15:46.879 [2024-04-17 15:38:27.203724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.879 [2024-04-17 15:38:27.203749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.879 [2024-04-17 15:38:27.203797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.879 [2024-04-17 15:38:27.203825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.879 [2024-04-17 15:38:27.203852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.879 [2024-04-17 15:38:27.203866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eba30 is same with the state(5) to be set 00:15:46.879 [2024-04-17 15:38:27.204985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.879 [2024-04-17 15:38:27.205029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14eba30 (9): Bad file descriptor 00:15:46.879 [2024-04-17 15:38:27.205408] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:46.879 [2024-04-17 15:38:27.205493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:46.879 [2024-04-17 15:38:27.205544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:46.879 [2024-04-17 15:38:27.205566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14eba30 with addr=10.0.0.2, port=4421 00:15:46.879 [2024-04-17 15:38:27.205582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eba30 is same with the state(5) to be set 00:15:46.879 [2024-04-17 15:38:27.205624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14eba30 (9): Bad file descriptor 00:15:46.879 [2024-04-17 15:38:27.205655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:46.879 [2024-04-17 15:38:27.205670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:15:46.879 [2024-04-17 15:38:27.205685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.879 [2024-04-17 15:38:27.205717] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
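The reconnect attempt above fails with connect() errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4421 at that moment, so the controller re-initialization is abandoned and retried (the retry about ten seconds later, below, succeeds). A small sketch of what that errno means at the socket level; the address and port are simply the ones from the log, so expect a different error, a timeout, or even a successful connect in another environment:

```python
# Illustrate the "connect() failed, errno = 111" lines above: errno 111 on
# Linux is ECONNREFUSED, returned when no listener accepts the TCP connect.
# The target below is just the address/port from the log, not something that
# is expected to be reachable wherever this snippet runs.
import errno
import socket

def probe(addr: str = "10.0.0.2", port: int = 4421) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2.0)
        try:
            sock.connect((addr, port))
            print(f"connected to {addr}:{port}")
        except OSError as exc:
            name = errno.errorcode.get(exc.errno, "unknown") if exc.errno else "timeout"
            print(f"connect() failed, errno = {exc.errno} ({name})")

if __name__ == "__main__":
    probe()
```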
00:15:46.879 [2024-04-17 15:38:27.205733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.879 [2024-04-17 15:38:37.280339] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:46.879 Received shutdown signal, test time was about 55.403267 seconds 00:15:46.879 00:15:46.879 Latency(us) 00:15:46.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.879 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:46.879 Verification LBA range: start 0x0 length 0x4000 00:15:46.879 Nvme0n1 : 55.40 7063.54 27.59 0.00 0.00 18091.80 997.93 7076934.75 00:15:46.879 =================================================================================================================== 00:15:46.879 Total : 7063.54 27.59 0.00 0.00 18091.80 997.93 7076934.75 00:15:46.879 15:38:48 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:15:46.879 15:38:48 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.879 15:38:48 -- host/multipath.sh@125 -- # nvmftestfini 00:15:46.879 15:38:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:46.879 15:38:48 -- nvmf/common.sh@117 -- # sync 00:15:46.879 15:38:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.879 15:38:48 -- nvmf/common.sh@120 -- # set +e 00:15:46.879 15:38:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.879 15:38:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.879 rmmod nvme_tcp 00:15:46.879 rmmod nvme_fabrics 00:15:46.879 rmmod nvme_keyring 00:15:46.879 15:38:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.879 15:38:48 -- nvmf/common.sh@124 -- # set -e 00:15:46.879 15:38:48 -- nvmf/common.sh@125 -- # return 0 00:15:46.879 15:38:48 -- nvmf/common.sh@478 -- # '[' -n 77058 ']' 00:15:46.879 15:38:48 -- nvmf/common.sh@479 -- # killprocess 77058 00:15:46.879 15:38:48 -- common/autotest_common.sh@936 -- # '[' -z 77058 ']' 00:15:46.879 15:38:48 -- common/autotest_common.sh@940 -- # kill -0 77058 00:15:46.879 15:38:48 -- common/autotest_common.sh@941 -- # uname 00:15:46.879 15:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.879 15:38:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77058 00:15:46.879 15:38:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:46.879 killing process with pid 77058 00:15:46.879 15:38:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:46.879 15:38:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77058' 00:15:46.879 15:38:48 -- common/autotest_common.sh@955 -- # kill 77058 00:15:46.879 15:38:48 -- common/autotest_common.sh@960 -- # wait 77058 00:15:47.137 15:38:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:47.137 15:38:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:47.137 15:38:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:47.137 15:38:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.137 15:38:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.137 15:38:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.137 15:38:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.137 15:38:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.137 15:38:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:47.137 00:15:47.137 real 1m1.730s 00:15:47.137 user 
2m50.570s 00:15:47.137 sys 0m18.708s 00:15:47.137 15:38:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.137 ************************************ 00:15:47.137 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:47.137 END TEST nvmf_multipath 00:15:47.137 ************************************ 00:15:47.397 15:38:48 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:15:47.397 15:38:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.397 15:38:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.397 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 ************************************ 00:15:47.397 START TEST nvmf_timeout 00:15:47.397 ************************************ 00:15:47.397 15:38:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:15:47.397 * Looking for test storage... 00:15:47.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:47.397 15:38:48 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.397 15:38:48 -- nvmf/common.sh@7 -- # uname -s 00:15:47.397 15:38:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.397 15:38:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.397 15:38:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.397 15:38:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.397 15:38:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.397 15:38:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.397 15:38:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.397 15:38:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.397 15:38:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.397 15:38:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.397 15:38:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:15:47.397 15:38:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:15:47.397 15:38:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.397 15:38:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.397 15:38:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.397 15:38:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.397 15:38:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.397 15:38:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.397 15:38:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.397 15:38:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.397 15:38:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.397 15:38:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.397 15:38:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.397 15:38:48 -- paths/export.sh@5 -- # export PATH 00:15:47.397 15:38:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.397 15:38:48 -- nvmf/common.sh@47 -- # : 0 00:15:47.397 15:38:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.397 15:38:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.397 15:38:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.397 15:38:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.397 15:38:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.397 15:38:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.397 15:38:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.397 15:38:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.397 15:38:48 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.397 15:38:48 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.397 15:38:48 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.397 15:38:48 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:47.397 15:38:48 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:47.397 15:38:48 -- host/timeout.sh@19 -- # nvmftestinit 00:15:47.397 15:38:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:47.397 15:38:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.397 15:38:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:47.397 15:38:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:47.397 15:38:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:47.397 15:38:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.397 15:38:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.397 15:38:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.397 15:38:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:15:47.397 15:38:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:47.397 15:38:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:47.397 15:38:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:47.397 15:38:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:47.397 15:38:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:47.397 15:38:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.397 15:38:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.397 15:38:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.397 15:38:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:47.397 15:38:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.397 15:38:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.397 15:38:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.398 15:38:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.398 15:38:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.398 15:38:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.398 15:38:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.398 15:38:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.398 15:38:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:47.398 15:38:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:47.398 Cannot find device "nvmf_tgt_br" 00:15:47.398 15:38:48 -- nvmf/common.sh@155 -- # true 00:15:47.398 15:38:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.660 Cannot find device "nvmf_tgt_br2" 00:15:47.660 15:38:48 -- nvmf/common.sh@156 -- # true 00:15:47.660 15:38:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:47.660 15:38:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:47.660 Cannot find device "nvmf_tgt_br" 00:15:47.660 15:38:48 -- nvmf/common.sh@158 -- # true 00:15:47.660 15:38:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:47.660 Cannot find device "nvmf_tgt_br2" 00:15:47.660 15:38:48 -- nvmf/common.sh@159 -- # true 00:15:47.660 15:38:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:47.660 15:38:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:47.660 15:38:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.660 15:38:48 -- nvmf/common.sh@162 -- # true 00:15:47.660 15:38:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.660 15:38:48 -- nvmf/common.sh@163 -- # true 00:15:47.660 15:38:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.660 15:38:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.660 15:38:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.660 15:38:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.660 15:38:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.660 15:38:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.660 15:38:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:15:47.660 15:38:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.660 15:38:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.660 15:38:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:47.660 15:38:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:47.660 15:38:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:47.660 15:38:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:47.660 15:38:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.660 15:38:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.660 15:38:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.660 15:38:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:47.660 15:38:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:47.660 15:38:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.660 15:38:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.660 15:38:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.920 15:38:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.920 15:38:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.920 15:38:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:47.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:47.920 00:15:47.920 --- 10.0.0.2 ping statistics --- 00:15:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.920 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:47.920 15:38:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:47.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:47.920 00:15:47.920 --- 10.0.0.3 ping statistics --- 00:15:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.920 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:47.920 15:38:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:47.920 00:15:47.920 --- 10.0.0.1 ping statistics --- 00:15:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.920 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:47.920 15:38:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.920 15:38:49 -- nvmf/common.sh@422 -- # return 0 00:15:47.920 15:38:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:47.920 15:38:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.920 15:38:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:47.920 15:38:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:47.920 15:38:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.920 15:38:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:47.920 15:38:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:47.920 15:38:49 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:15:47.920 15:38:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:47.920 15:38:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:47.920 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:47.920 15:38:49 -- nvmf/common.sh@470 -- # nvmfpid=78233 00:15:47.920 15:38:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:47.920 15:38:49 -- nvmf/common.sh@471 -- # waitforlisten 78233 00:15:47.920 15:38:49 -- common/autotest_common.sh@817 -- # '[' -z 78233 ']' 00:15:47.920 15:38:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.920 15:38:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.920 15:38:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.920 15:38:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.920 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:47.920 [2024-04-17 15:38:49.216108] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:15:47.920 [2024-04-17 15:38:49.216241] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.920 [2024-04-17 15:38:49.360490] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.179 [2024-04-17 15:38:49.528477] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.179 [2024-04-17 15:38:49.528556] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.179 [2024-04-17 15:38:49.528571] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.179 [2024-04-17 15:38:49.528583] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.179 [2024-04-17 15:38:49.528592] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
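The nvmf_veth_init trace above builds the isolated test network: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends, an initiator-side veth pair on the host, and an nvmf_br bridge joining them, with 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the two target interfaces. A condensed sketch of that topology follows; it only restates commands already visible in the trace (the second target interface, 10.0.0.3, is created the same way and omitted here), so treat it as an illustration of what the harness does, not a copy of nvmf/common.sh.

    # sketch: veth/bridge topology used for the NVMe-oF TCP tests (names taken from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the two host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target reachability check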
00:15:48.179 [2024-04-17 15:38:49.528716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.179 [2024-04-17 15:38:49.528730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.114 15:38:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.114 15:38:50 -- common/autotest_common.sh@850 -- # return 0 00:15:49.114 15:38:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:49.114 15:38:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.114 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.114 15:38:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.114 15:38:50 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.114 15:38:50 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.114 [2024-04-17 15:38:50.513825] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.114 15:38:50 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:49.682 Malloc0 00:15:49.682 15:38:50 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.682 15:38:51 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.941 15:38:51 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.200 [2024-04-17 15:38:51.542069] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.200 15:38:51 -- host/timeout.sh@32 -- # bdevperf_pid=78289 00:15:50.200 15:38:51 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:15:50.200 15:38:51 -- host/timeout.sh@34 -- # waitforlisten 78289 /var/tmp/bdevperf.sock 00:15:50.200 15:38:51 -- common/autotest_common.sh@817 -- # '[' -z 78289 ']' 00:15:50.200 15:38:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.200 15:38:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.200 15:38:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.200 15:38:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.200 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.200 [2024-04-17 15:38:51.623676] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 
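The rpc.py sequence just traced provisions the target side of the timeout test: a TCP transport, a 64 MiB malloc bdev (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. A condensed sketch, restating only commands visible in the trace (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, talking to the running nvmf_tgt over its default socket):

    # sketch: target provisioning as driven by host/timeout.sh
    rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -u sets in-capsule data size to 8192 bytes
    rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev named Malloc0, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # expose Malloc0 as a namespace of cnode1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the in-namespace address

bdevperf then attaches to this subsystem over its own RPC socket (/var/tmp/bdevperf.sock) with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, and the test removes the listener while I/O is in flight, which is what produces the abort and reconnect messages that follow.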
00:15:50.200 [2024-04-17 15:38:51.623831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78289 ] 00:15:50.459 [2024-04-17 15:38:51.766702] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.459 [2024-04-17 15:38:51.898087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.402 15:38:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.402 15:38:52 -- common/autotest_common.sh@850 -- # return 0 00:15:51.402 15:38:52 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:51.402 15:38:52 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:15:51.661 NVMe0n1 00:15:51.661 15:38:53 -- host/timeout.sh@51 -- # rpc_pid=78311 00:15:51.661 15:38:53 -- host/timeout.sh@53 -- # sleep 1 00:15:51.661 15:38:53 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.919 Running I/O for 10 seconds... 00:15:52.870 15:38:54 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.131 [2024-04-17 15:38:54.376941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.131 [2024-04-17 15:38:54.377200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.131 [2024-04-17 15:38:54.377213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.377765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 
15:38:54.377897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.377979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.377991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.132 [2024-04-17 15:38:54.378120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.378143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.132 [2024-04-17 15:38:54.378156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.132 [2024-04-17 15:38:54.378166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:53.133 [2024-04-17 15:38:54.378356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63912 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:53.133 [2024-04-17 15:38:54.378861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.378979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.378992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.379003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.379015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.379025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.379038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.379048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.379071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.133 [2024-04-17 15:38:54.379081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.133 [2024-04-17 15:38:54.379094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 
15:38:54.379104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.134 [2024-04-17 15:38:54.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.134 [2024-04-17 15:38:54.379998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.135 [2024-04-17 15:38:54.380007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.135 [2024-04-17 15:38:54.380020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.135 [2024-04-17 15:38:54.380030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.135 [2024-04-17 15:38:54.380043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.135 [2024-04-17 15:38:54.380054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.135 [2024-04-17 15:38:54.380067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:53.135 [2024-04-17 15:38:54.380082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.135 
[2024-04-17 15:38:54.380095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1009830 is same with the state(5) to be set
00:15:53.135 [2024-04-17 15:38:54.380109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:53.135 [2024-04-17 15:38:54.380118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:53.135 [2024-04-17 15:38:54.380127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64400 len:8 PRP1 0x0 PRP2 0x0
00:15:53.135 [2024-04-17 15:38:54.380137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:53.135 [2024-04-17 15:38:54.380216] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1009830 was disconnected and freed. reset controller.
00:15:53.135 [2024-04-17 15:38:54.380490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:53.135 [2024-04-17 15:38:54.380579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1dc0 (9): Bad file descriptor
00:15:53.135 [2024-04-17 15:38:54.380701] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:15:53.135 [2024-04-17 15:38:54.380786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:53.135 [2024-04-17 15:38:54.380835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:53.135 [2024-04-17 15:38:54.380852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1dc0 with addr=10.0.0.2, port=4420
00:15:53.135 [2024-04-17 15:38:54.380864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1dc0 is same with the state(5) to be set
00:15:53.135 [2024-04-17 15:38:54.380885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1dc0 (9): Bad file descriptor
00:15:53.135 [2024-04-17 15:38:54.380903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:15:53.135 [2024-04-17 15:38:54.380914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:15:53.135 [2024-04-17 15:38:54.380925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:53.135 [2024-04-17 15:38:54.380953] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:53.135 [2024-04-17 15:38:54.380965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:53.135 15:38:54 -- host/timeout.sh@56 -- # sleep 2
00:15:55.036 [2024-04-17 15:38:56.381168] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:15:55.036 [2024-04-17 15:38:56.381303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:55.036 [2024-04-17 15:38:56.381349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:55.036 [2024-04-17 15:38:56.381367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1dc0 with addr=10.0.0.2, port=4420
00:15:55.036 [2024-04-17 15:38:56.381384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1dc0 is same with the state(5) to be set
00:15:55.036 [2024-04-17 15:38:56.381419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1dc0 (9): Bad file descriptor
00:15:55.036 [2024-04-17 15:38:56.381455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:15:55.036 [2024-04-17 15:38:56.381467] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:15:55.036 [2024-04-17 15:38:56.381479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:55.036 [2024-04-17 15:38:56.381511] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.036 [2024-04-17 15:38:56.381524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:55.036 15:38:56 -- host/timeout.sh@57 -- # get_controller
00:15:55.036 15:38:56 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:55.036 15:38:56 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:15:55.294 15:38:56 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:15:55.294 15:38:56 -- host/timeout.sh@58 -- # get_bdev
00:15:55.294 15:38:56 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:15:55.294 15:38:56 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:15:55.554 15:38:56 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:15:55.554 15:38:56 -- host/timeout.sh@61 -- # sleep 5
00:15:57.458 [2024-04-17 15:38:58.381773] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:15:57.458 [2024-04-17 15:38:58.381896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:57.458 [2024-04-17 15:38:58.381946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:57.458 [2024-04-17 15:38:58.381964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1dc0 with addr=10.0.0.2, port=4420
00:15:57.458 [2024-04-17 15:38:58.381981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1dc0 is same with the state(5) to be set
00:15:57.458 [2024-04-17 15:38:58.382013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1dc0 (9): Bad file descriptor
00:15:57.458 [2024-04-17 15:38:58.382035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:15:57.458 [2024-04-17 15:38:58.382045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:15:57.458 [2024-04-17 15:38:58.382058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:57.458 [2024-04-17 15:38:58.382106] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:57.458 [2024-04-17 15:38:58.382121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:59.363 [2024-04-17 15:39:00.382222] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:00.034
00:16:00.034 Latency(us)
00:16:00.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.034 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:00.034 Verification LBA range: start 0x0 length 0x4000
00:16:00.034 NVMe0n1 : 8.18 972.65 3.80 15.65 0.00 129305.96 3961.95 7015926.69
00:16:00.034 ===================================================================================================================
00:16:00.034 Total : 972.65 3.80 15.65 0.00 129305.96 3961.95 7015926.69
00:16:00.034 0
00:16:00.600 15:39:01 -- host/timeout.sh@62 -- # get_controller
00:16:00.600 15:39:01 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:00.600 15:39:01 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:16:00.858 15:39:02 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:16:00.858 15:39:02 -- host/timeout.sh@63 -- # get_bdev
00:16:00.858 15:39:02 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:16:00.858 15:39:02 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:16:01.117 15:39:02 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:16:01.117 15:39:02 -- host/timeout.sh@65 -- # wait 78311
00:16:01.117 15:39:02 -- host/timeout.sh@67 -- # killprocess 78289
00:16:01.117 15:39:02 -- common/autotest_common.sh@936 -- # '[' -z 78289 ']'
00:16:01.117 15:39:02 -- common/autotest_common.sh@940 -- # kill -0 78289
00:16:01.117 15:39:02 -- common/autotest_common.sh@941 -- # uname
00:16:01.117 15:39:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:01.117 15:39:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78289
00:16:01.117 15:39:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:16:01.117 15:39:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
killing process with pid 78289
15:39:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78289'
15:39:02 -- common/autotest_common.sh@955 -- # kill 78289
00:16:01.117 Received shutdown signal, test time was about 9.274747 seconds
00:16:01.117
00:16:01.117 Latency(us)
00:16:01.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:01.117 ===================================================================================================================
00:16:01.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:01.117 15:39:02 -- common/autotest_common.sh@960 -- # wait 78289
00:16:01.684 15:39:02 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-04-17 15:39:03.027971] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
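The teardown above is driven entirely over SPDK's JSON-RPC interface from the bdevperf side. A minimal sketch of the same end-of-test check, assuming only the repo path and the /var/tmp/bdevperf.sock socket that appear in this trace, would be:

    # Query initiator-side state over the bdevperf RPC socket; after the
    # controller-loss timeout has expired both lists are expected to be empty,
    # which is what the [[ '' == '' ]] comparisons above assert.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'

Both commands are taken verbatim from the trace; nothing beyond what the log already shows is assumed.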
00:16:01.684 15:39:03 -- host/timeout.sh@74 -- # bdevperf_pid=78434
00:16:01.684 15:39:03 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:16:01.684 15:39:03 -- host/timeout.sh@76 -- # waitforlisten 78434 /var/tmp/bdevperf.sock
00:16:01.684 15:39:03 -- common/autotest_common.sh@817 -- # '[' -z 78434 ']'
00:16:01.684 15:39:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:01.684 15:39:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:01.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:01.684 15:39:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:01.684 15:39:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:01.684 15:39:03 -- common/autotest_common.sh@10 -- # set +x
00:16:01.684 [2024-04-17 15:39:03.100932] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization...
00:16:01.684 [2024-04-17 15:39:03.101031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78434 ]
00:16:01.943 [2024-04-17 15:39:03.237457] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:02.880 [2024-04-17 15:39:03.380077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:02.880 15:39:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:02.880 15:39:04 -- common/autotest_common.sh@850 -- # return 0
00:16:02.880 15:39:04 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:16:02.880 15:39:04 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:16:03.138 NVMe0n1
00:16:03.138 15:39:04 -- host/timeout.sh@84 -- # rpc_pid=78452
00:16:03.138 15:39:04 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:03.138 15:39:04 -- host/timeout.sh@86 -- # sleep 1
00:16:03.505 Running I/O for 10 seconds...
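The second bdevperf pass is wired up the same way; the interesting part is the reconnect policy passed to bdev_nvme_attach_controller above. A minimal sketch of that sequence, using only paths and arguments visible in this log (the reading of the three timeout flags below reflects the usual SPDK bdev_nvme behaviour and is not stated in the log itself):

    # Attach the controller with a 1 s interval between reconnect attempts,
    # fail queued I/O fast after 2 s without a connection, and give up
    # (delete the controller) after 5 s.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Dropping the target-side listener is what provokes the SQ DELETION aborts
    # that fill the trace below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Both RPC calls appear verbatim in the surrounding trace (the attach above, the listener removal immediately below).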
00:16:04.444 15:39:05 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.444 [2024-04-17 15:39:05.855954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.444 [2024-04-17 15:39:05.856018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.444 [2024-04-17 15:39:05.856343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.444 [2024-04-17 15:39:05.856354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856451] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 
[2024-04-17 15:39:05.856900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.856982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.856992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.445 [2024-04-17 15:39:05.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.445 [2024-04-17 15:39:05.857196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66376 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 
15:39:05.857738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.857985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.857996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.858005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.858025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.446 [2024-04-17 15:39:05.858036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.446 [2024-04-17 15:39:05.858047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 
15:39:05.858608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.447 [2024-04-17 15:39:05.858700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.447 [2024-04-17 15:39:05.858721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.447 [2024-04-17 15:39:05.858787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.447 [2024-04-17 15:39:05.858796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:16:04.447 [2024-04-17 15:39:05.858805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858873] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x182b950 was disconnected and freed. reset controller. 
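Every completion in the dump above carries the status "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the driver prints as ABORTED - SQ DELETION, i.e. the queued I/O was aborted because its submission queue was deleted during the controller reset. A rough, hypothetical post-processing helper (not part of autotest; the file name build.log is only illustrative) to tally these aborts from a saved copy of this console output:

# Hypothetical helper, assuming the console output was saved to build.log:
# count aborted completions per queue id.
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c | sort -rn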
00:16:04.447 [2024-04-17 15:39:05.858968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.447 [2024-04-17 15:39:05.858984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.858995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.447 [2024-04-17 15:39:05.859005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.859015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.447 [2024-04-17 15:39:05.859024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.859035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.447 [2024-04-17 15:39:05.859044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.447 [2024-04-17 15:39:05.859053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:04.447 [2024-04-17 15:39:05.859282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:04.447 [2024-04-17 15:39:05.859307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:04.447 [2024-04-17 15:39:05.859425] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:04.448 [2024-04-17 15:39:05.859493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:04.448 [2024-04-17 15:39:05.859538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:04.448 [2024-04-17 15:39:05.859555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420 00:16:04.448 [2024-04-17 15:39:05.859567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:04.448 [2024-04-17 15:39:05.859586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:04.448 [2024-04-17 15:39:05.859616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:04.448 [2024-04-17 15:39:05.859628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:04.448 [2024-04-17 15:39:05.859641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:04.448 [2024-04-17 15:39:05.859661] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:04.448 [2024-04-17 15:39:05.859673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:04.448 15:39:05 -- host/timeout.sh@90 -- # sleep 1
00:16:05.824 [2024-04-17 15:39:06.859831] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:16:05.824 [2024-04-17 15:39:06.859927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:05.824 [2024-04-17 15:39:06.859972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:05.824 [2024-04-17 15:39:06.859989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420
00:16:05.824 [2024-04-17 15:39:06.860005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set
00:16:05.824 [2024-04-17 15:39:06.860034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor
00:16:05.824 [2024-04-17 15:39:06.860055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:05.824 [2024-04-17 15:39:06.860066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:05.824 [2024-04-17 15:39:06.860078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:05.824 [2024-04-17 15:39:06.860108] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:05.824 [2024-04-17 15:39:06.860121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:05.824 15:39:06 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:05.824 [2024-04-17 15:39:07.092584] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:05.824 15:39:07 -- host/timeout.sh@92 -- # wait 78452
00:16:06.757 [2024-04-17 15:39:07.874784] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:13.335
00:16:13.335                                                                     Latency(us)
00:16:13.335 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:13.335 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:13.335      Verification LBA range: start 0x0 length 0x4000
00:16:13.335      NVMe0n1                :      10.01    5712.94      22.32       0.00     0.00   22356.90    1258.59 3019898.88
00:16:13.335 ===================================================================================================================
00:16:13.335 Total                       :               5712.94      22.32       0.00     0.00   22356.90    1258.59 3019898.88
00:16:13.335 0
00:16:13.335 15:39:14 -- host/timeout.sh@97 -- # rpc_pid=78562
00:16:13.335 15:39:14 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:13.335 15:39:14 -- host/timeout.sh@98 -- # sleep 1
00:16:13.593 Running I/O for 10 seconds...
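The traced steps above come from host/timeout.sh: the listener for nqn.2016-06.io.spdk:cnode1 is re-added over the SPDK RPC socket, the pending controller reset then succeeds, bdevperf reports the 10-second verify run, and perform_tests is started again. A minimal sketch of the same sequence, assuming the workspace paths shown in the trace and a target plus bdevperf instance that are already running (not a general recipe):

# Sketch only; paths, NQN, address and port are copied from the trace above.
SPDK=/home/vagrant/spdk_repo/spdk

# Re-announce the subsystem on 10.0.0.2:4420 so the host side can reconnect.
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Ask the already-running bdevperf instance to run another pass.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!
sleep 1
wait "$rpc_pid"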
00:16:14.533 15:39:15 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.533 [2024-04-17 15:39:15.942282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bb80 is same with the state(5) to be set 00:16:14.533 [2024-04-17 15:39:15.942548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.533 [2024-04-17 15:39:15.942884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.533 [2024-04-17 15:39:15.942921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.533 [2024-04-17 15:39:15.942933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.942945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.942954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.942966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.942976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.942988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.942997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.534 [2024-04-17 15:39:15.943426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.534 [2024-04-17 15:39:15.943705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.534 [2024-04-17 15:39:15.943716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 
[2024-04-17 15:39:15.943862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.943966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.943986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.943997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.944131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.944986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.944997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.945007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.945131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.945256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.945285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.535 [2024-04-17 15:39:15.945407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.945431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70208 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.945683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.945711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.945724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.535 [2024-04-17 15:39:15.945864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.535 [2024-04-17 15:39:15.946102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 
[2024-04-17 15:39:15.946634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.946909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.946982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.946993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.947002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.947023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.947044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.947066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.536 [2024-04-17 15:39:15.947087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.536 [2024-04-17 15:39:15.947282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.536 [2024-04-17 15:39:15.947291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 
[2024-04-17 15:39:15.947536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.537 [2024-04-17 15:39:15.947588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c2e0 is same with the state(5) to be set 00:16:14.537 [2024-04-17 15:39:15.947614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:14.537 [2024-04-17 15:39:15.947622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:14.537 [2024-04-17 15:39:15.947631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70064 len:8 PRP1 0x0 PRP2 0x0 00:16:14.537 [2024-04-17 15:39:15.947641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.537 [2024-04-17 15:39:15.947710] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x182c2e0 was disconnected and freed. reset controller. 00:16:14.537 [2024-04-17 15:39:15.947970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:14.537 [2024-04-17 15:39:15.948062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:14.537 [2024-04-17 15:39:15.948180] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:14.537 [2024-04-17 15:39:15.948234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:14.537 [2024-04-17 15:39:15.948277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:14.537 [2024-04-17 15:39:15.948293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420 00:16:14.537 [2024-04-17 15:39:15.948305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:14.537 [2024-04-17 15:39:15.948324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:14.537 [2024-04-17 15:39:15.948341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:14.537 [2024-04-17 15:39:15.948351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:14.537 [2024-04-17 15:39:15.948362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:14.537 [2024-04-17 15:39:15.948382] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:14.537 [2024-04-17 15:39:15.948393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:14.537 15:39:15 -- host/timeout.sh@101 -- # sleep 3 00:16:15.910 [2024-04-17 15:39:16.948547] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:15.910 [2024-04-17 15:39:16.948670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:15.910 [2024-04-17 15:39:16.948715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:15.910 [2024-04-17 15:39:16.948732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420 00:16:15.910 [2024-04-17 15:39:16.948749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:15.910 [2024-04-17 15:39:16.948792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:15.910 [2024-04-17 15:39:16.948814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:15.910 [2024-04-17 15:39:16.948825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:15.910 [2024-04-17 15:39:16.948836] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:15.910 [2024-04-17 15:39:16.948867] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:15.910 [2024-04-17 15:39:16.948880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:16.842 [2024-04-17 15:39:17.949068] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:16.842 [2024-04-17 15:39:17.949193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:16.842 [2024-04-17 15:39:17.949238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:16.842 [2024-04-17 15:39:17.949255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420 00:16:16.842 [2024-04-17 15:39:17.949271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:16.842 [2024-04-17 15:39:17.949301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:16.842 [2024-04-17 15:39:17.949322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:16.842 [2024-04-17 15:39:17.949332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:16.842 [2024-04-17 15:39:17.949344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:16.842 [2024-04-17 15:39:17.949375] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:16.842 [2024-04-17 15:39:17.949387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.776 [2024-04-17 15:39:18.952865] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.776 [2024-04-17 15:39:18.952991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.776 [2024-04-17 15:39:18.953037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:17.777 [2024-04-17 15:39:18.953053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c3dc0 with addr=10.0.0.2, port=4420 00:16:17.777 [2024-04-17 15:39:18.953070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3dc0 is same with the state(5) to be set 00:16:17.777 [2024-04-17 15:39:18.953327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c3dc0 (9): Bad file descriptor 00:16:17.777 [2024-04-17 15:39:18.953572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:17.777 [2024-04-17 15:39:18.953587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:17.777 [2024-04-17 15:39:18.953599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:17.777 [2024-04-17 15:39:18.957571] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:17.777 [2024-04-17 15:39:18.957605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.777 15:39:18 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.777 [2024-04-17 15:39:19.178448] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.777 15:39:19 -- host/timeout.sh@103 -- # wait 78562 00:16:18.708 [2024-04-17 15:39:19.993152] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:23.975 
00:16:23.975 Latency(us)
00:16:23.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.975 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:23.975 Verification LBA range: start 0x0 length 0x4000
00:16:23.975 NVMe0n1 : 10.01 5105.07 19.94 3677.34 0.00 14539.58 677.70 3019898.88
00:16:23.975 ===================================================================================================================
00:16:23.975 Total : 5105.07 19.94 3677.34 0.00 14539.58 0.00 3019898.88
00:16:23.975 0
00:16:23.975 15:39:24 -- host/timeout.sh@105 -- # killprocess 78434
00:16:23.975 15:39:24 -- common/autotest_common.sh@936 -- # '[' -z 78434 ']'
00:16:23.975 15:39:24 -- common/autotest_common.sh@940 -- # kill -0 78434
00:16:23.975 15:39:24 -- common/autotest_common.sh@941 -- # uname
00:16:23.975 15:39:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:23.975 15:39:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78434
00:16:23.975 15:39:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:16:23.975 15:39:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:16:23.975 killing process with pid 78434
00:16:23.975 15:39:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78434'
00:16:23.975 15:39:24 -- common/autotest_common.sh@955 -- # kill 78434
00:16:23.975 Received shutdown signal, test time was about 10.000000 seconds
00:16:23.975 
00:16:23.975 Latency(us)
00:16:23.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.975 ===================================================================================================================
00:16:23.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:23.975 15:39:24 -- common/autotest_common.sh@960 -- # wait 78434
00:16:23.975 15:39:25 -- host/timeout.sh@110 -- # bdevperf_pid=78681
00:16:23.975 15:39:25 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:16:23.975 15:39:25 -- host/timeout.sh@112 -- # waitforlisten 78681 /var/tmp/bdevperf.sock
00:16:23.975 15:39:25 -- common/autotest_common.sh@817 -- # '[' -z 78681 ']'
00:16:23.975 15:39:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:23.975 15:39:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:23.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:23.975 15:39:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:23.975 15:39:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:23.975 15:39:25 -- common/autotest_common.sh@10 -- # set +x
00:16:23.975 [2024-04-17 15:39:25.254564] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization...
00:16:23.975 [2024-04-17 15:39:25.254654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78681 ] 00:16:23.975 [2024-04-17 15:39:25.387778] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.234 [2024-04-17 15:39:25.540172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.801 15:39:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.801 15:39:26 -- common/autotest_common.sh@850 -- # return 0 00:16:24.801 15:39:26 -- host/timeout.sh@116 -- # dtrace_pid=78693 00:16:24.801 15:39:26 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 78681 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:16:24.801 15:39:26 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:16:25.059 15:39:26 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:16:25.626 NVMe0n1 00:16:25.626 15:39:26 -- host/timeout.sh@124 -- # rpc_pid=78734 00:16:25.626 15:39:26 -- host/timeout.sh@125 -- # sleep 1 00:16:25.626 15:39:26 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:25.626 Running I/O for 10 seconds... 00:16:26.563 15:39:27 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.823 [2024-04-17 15:39:28.089913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.090976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.091891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 
15:39:28.091912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.091921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.824 [2024-04-17 15:39:28.092867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.824 [2024-04-17 15:39:28.092879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.092889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.092901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.092911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.092923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.092933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.092944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.092957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.092968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.092980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.092991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 
15:39:28.093864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.093995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.094007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.094017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.094030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.094040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.094052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.094062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.825 [2024-04-17 15:39:28.094084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.825 [2024-04-17 15:39:28.094096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:26.826 [2024-04-17 15:39:28.094771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 15:39:28.094981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.826 [2024-04-17 15:39:28.094990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-04-17 
15:39:28.095002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.095981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.827 [2024-04-17 15:39:28.095991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcadfa0 is same with the state(5) to be set 00:16:26.827 [2024-04-17 15:39:28.096017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.827 [2024-04-17 15:39:28.096025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.827 [2024-04-17 15:39:28.096033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68720 len:8 PRP1 0x0 PRP2 0x0 00:16:26.827 [2024-04-17 15:39:28.096043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096366] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcadfa0 was disconnected and freed. reset controller. 00:16:26.827 [2024-04-17 15:39:28.096726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.827 [2024-04-17 15:39:28.096767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.827 [2024-04-17 15:39:28.096792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.827 [2024-04-17 15:39:28.096811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.827 [2024-04-17 15:39:28.096831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.827 [2024-04-17 15:39:28.096840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b030 is same with the state(5) to be set 00:16:26.827 [2024-04-17 15:39:28.097494] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.827 [2024-04-17 15:39:28.097534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b030 (9): Bad file descriptor 00:16:26.827 [2024-04-17 15:39:28.097680] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.827 [2024-04-17 15:39:28.097783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.827 [2024-04-17 15:39:28.097833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.827 [2024-04-17 15:39:28.097850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b030 with 
addr=10.0.0.2, port=4420 00:16:26.827 [2024-04-17 15:39:28.097862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b030 is same with the state(5) to be set 00:16:26.827 [2024-04-17 15:39:28.097883] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b030 (9): Bad file descriptor 00:16:26.827 [2024-04-17 15:39:28.097900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:26.827 [2024-04-17 15:39:28.097911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:26.827 [2024-04-17 15:39:28.097922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:26.827 [2024-04-17 15:39:28.097949] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:26.827 [2024-04-17 15:39:28.097961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.827 15:39:28 -- host/timeout.sh@128 -- # wait 78734 00:16:28.729 [2024-04-17 15:39:30.098165] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.729 [2024-04-17 15:39:30.098292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.729 [2024-04-17 15:39:30.098339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.729 [2024-04-17 15:39:30.098356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b030 with addr=10.0.0.2, port=4420 00:16:28.729 [2024-04-17 15:39:30.098372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b030 is same with the state(5) to be set 00:16:28.729 [2024-04-17 15:39:30.098401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b030 (9): Bad file descriptor 00:16:28.729 [2024-04-17 15:39:30.098423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.730 [2024-04-17 15:39:30.098434] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:28.730 [2024-04-17 15:39:30.098446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:28.730 [2024-04-17 15:39:30.098475] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
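Each reconnect attempt above (and the further retries that follow below) fails the same way: connect() to 10.0.0.2:4420 returns errno 111, which is ECONNREFUSED on Linux, and bdev_nvme schedules another attempt roughly two seconds later. Purely as an illustration of that cadence, and not part of the test scripts, the same "refuse, wait ~2 s, retry" pattern can be reproduced with a plain bash TCP probe against the address taken from the trace (assumes bash's /dev/tcp redirection and the coreutils timeout binary are available):

# Probe 10.0.0.2:4420 the way the reconnect loop does: try, report the refusal,
# wait ~2 s, try again.
for attempt in 1 2 3 4; do
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "attempt $attempt: connect() succeeded"
        break
    fi
    echo "attempt $attempt: connect() refused (errno 111 / ECONNREFUSED)"
    sleep 2
done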
00:16:28.730 [2024-04-17 15:39:30.098488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.287 [2024-04-17 15:39:32.098745] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.287 [2024-04-17 15:39:32.098865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.287 [2024-04-17 15:39:32.098912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.287 [2024-04-17 15:39:32.098929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b030 with addr=10.0.0.2, port=4420 00:16:31.287 [2024-04-17 15:39:32.098946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b030 is same with the state(5) to be set 00:16:31.287 [2024-04-17 15:39:32.098977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b030 (9): Bad file descriptor 00:16:31.287 [2024-04-17 15:39:32.099006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:31.287 [2024-04-17 15:39:32.099017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:31.287 [2024-04-17 15:39:32.099029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.287 [2024-04-17 15:39:32.099059] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.287 [2024-04-17 15:39:32.099072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:32.662 [2024-04-17 15:39:34.099205] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:34.036 00:16:34.036 Latency(us) 00:16:34.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.036 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:16:34.036 NVMe0n1 : 8.18 2110.49 8.24 15.64 0.00 60120.15 8400.52 7046430.72 00:16:34.036 =================================================================================================================== 00:16:34.036 Total : 2110.49 8.24 15.64 0.00 60120.15 8400.52 7046430.72 00:16:34.036 0 00:16:34.036 15:39:35 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:34.036 Attaching 5 probes... 
00:16:34.036 1329.947989: reset bdev controller NVMe0 00:16:34.036 1330.047670: reconnect bdev controller NVMe0 00:16:34.036 3330.465579: reconnect delay bdev controller NVMe0 00:16:34.036 3330.488128: reconnect bdev controller NVMe0 00:16:34.036 5331.003053: reconnect delay bdev controller NVMe0 00:16:34.036 5331.031551: reconnect bdev controller NVMe0 00:16:34.036 7331.585426: reconnect delay bdev controller NVMe0 00:16:34.036 7331.618865: reconnect bdev controller NVMe0 00:16:34.036 15:39:35 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:16:34.036 15:39:35 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:16:34.036 15:39:35 -- host/timeout.sh@136 -- # kill 78693 00:16:34.036 15:39:35 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:34.036 15:39:35 -- host/timeout.sh@139 -- # killprocess 78681 00:16:34.036 15:39:35 -- common/autotest_common.sh@936 -- # '[' -z 78681 ']' 00:16:34.036 15:39:35 -- common/autotest_common.sh@940 -- # kill -0 78681 00:16:34.036 15:39:35 -- common/autotest_common.sh@941 -- # uname 00:16:34.036 15:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.036 15:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78681 00:16:34.036 killing process with pid 78681 00:16:34.036 Received shutdown signal, test time was about 8.244544 seconds 00:16:34.036 00:16:34.036 Latency(us) 00:16:34.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.036 =================================================================================================================== 00:16:34.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.036 15:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:34.036 15:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:34.036 15:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78681' 00:16:34.036 15:39:35 -- common/autotest_common.sh@955 -- # kill 78681 00:16:34.036 15:39:35 -- common/autotest_common.sh@960 -- # wait 78681 00:16:34.294 15:39:35 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.294 15:39:35 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:16:34.294 15:39:35 -- host/timeout.sh@145 -- # nvmftestfini 00:16:34.294 15:39:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:34.294 15:39:35 -- nvmf/common.sh@117 -- # sync 00:16:34.555 15:39:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.555 15:39:35 -- nvmf/common.sh@120 -- # set +e 00:16:34.555 15:39:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.555 15:39:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.555 rmmod nvme_tcp 00:16:34.555 rmmod nvme_fabrics 00:16:34.555 rmmod nvme_keyring 00:16:34.555 15:39:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.555 15:39:35 -- nvmf/common.sh@124 -- # set -e 00:16:34.555 15:39:35 -- nvmf/common.sh@125 -- # return 0 00:16:34.555 15:39:35 -- nvmf/common.sh@478 -- # '[' -n 78233 ']' 00:16:34.555 15:39:35 -- nvmf/common.sh@479 -- # killprocess 78233 00:16:34.555 15:39:35 -- common/autotest_common.sh@936 -- # '[' -z 78233 ']' 00:16:34.555 15:39:35 -- common/autotest_common.sh@940 -- # kill -0 78233 00:16:34.555 15:39:35 -- common/autotest_common.sh@941 -- # uname 00:16:34.555 15:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.555 15:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 78233 00:16:34.555 killing process with pid 78233 00:16:34.555 15:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.555 15:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.555 15:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78233' 00:16:34.555 15:39:35 -- common/autotest_common.sh@955 -- # kill 78233 00:16:34.555 15:39:35 -- common/autotest_common.sh@960 -- # wait 78233 00:16:34.813 15:39:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:34.813 15:39:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:34.813 15:39:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:34.813 15:39:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.813 15:39:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.813 15:39:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.813 15:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.813 15:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.072 15:39:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.072 00:16:35.072 real 0m47.590s 00:16:35.072 user 2m19.040s 00:16:35.072 sys 0m5.946s 00:16:35.072 15:39:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.072 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:16:35.072 ************************************ 00:16:35.072 END TEST nvmf_timeout 00:16:35.072 ************************************ 00:16:35.072 15:39:36 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:16:35.072 15:39:36 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:16:35.072 15:39:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:35.072 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:16:35.072 15:39:36 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:16:35.072 ************************************ 00:16:35.072 END TEST nvmf_tcp 00:16:35.072 ************************************ 00:16:35.072 00:16:35.072 real 8m56.502s 00:16:35.072 user 21m0.153s 00:16:35.072 sys 2m29.185s 00:16:35.072 15:39:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.072 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:16:35.072 15:39:36 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:16:35.072 15:39:36 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:16:35.072 15:39:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:35.072 15:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.072 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:16:35.072 ************************************ 00:16:35.072 START TEST nvmf_dif 00:16:35.072 ************************************ 00:16:35.072 15:39:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:16:35.331 * Looking for test storage... 
00:16:35.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.331 15:39:36 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.331 15:39:36 -- nvmf/common.sh@7 -- # uname -s 00:16:35.331 15:39:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.331 15:39:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.331 15:39:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.331 15:39:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.331 15:39:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.331 15:39:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.331 15:39:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.331 15:39:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.331 15:39:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.331 15:39:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02dfa913-00e4-4a25-ab2c-855f7283d4db 00:16:35.331 15:39:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=02dfa913-00e4-4a25-ab2c-855f7283d4db 00:16:35.331 15:39:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.331 15:39:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.331 15:39:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.331 15:39:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.331 15:39:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.331 15:39:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.331 15:39:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.331 15:39:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.331 15:39:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.331 15:39:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.331 15:39:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.331 15:39:36 -- paths/export.sh@5 -- # export PATH 00:16:35.331 15:39:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.331 15:39:36 -- nvmf/common.sh@47 -- # : 0 00:16:35.331 15:39:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.331 15:39:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.331 15:39:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.331 15:39:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.331 15:39:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.331 15:39:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.331 15:39:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.331 15:39:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.331 15:39:36 -- target/dif.sh@15 -- # NULL_META=16 00:16:35.331 15:39:36 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:16:35.331 15:39:36 -- target/dif.sh@15 -- # NULL_SIZE=64 00:16:35.331 15:39:36 -- target/dif.sh@15 -- # NULL_DIF=1 00:16:35.331 15:39:36 -- target/dif.sh@135 -- # nvmftestinit 00:16:35.331 15:39:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:35.331 15:39:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.331 15:39:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:35.331 15:39:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:35.331 15:39:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:35.331 15:39:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.331 15:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:16:35.331 15:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.331 15:39:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:35.331 15:39:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:35.331 15:39:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.331 15:39:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.331 15:39:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.331 15:39:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:35.331 15:39:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.331 15:39:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.331 15:39:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.331 15:39:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.331 15:39:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.331 15:39:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.331 15:39:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.331 15:39:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.331 15:39:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:35.331 15:39:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:35.331 Cannot find device "nvmf_tgt_br" 
00:16:35.331 15:39:36 -- nvmf/common.sh@155 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.331 Cannot find device "nvmf_tgt_br2" 00:16:35.331 15:39:36 -- nvmf/common.sh@156 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:35.331 15:39:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:35.331 Cannot find device "nvmf_tgt_br" 00:16:35.331 15:39:36 -- nvmf/common.sh@158 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.331 Cannot find device "nvmf_tgt_br2" 00:16:35.331 15:39:36 -- nvmf/common.sh@159 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.331 15:39:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.331 15:39:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.331 15:39:36 -- nvmf/common.sh@162 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.331 15:39:36 -- nvmf/common.sh@163 -- # true 00:16:35.331 15:39:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.590 15:39:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.590 15:39:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.590 15:39:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.590 15:39:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.590 15:39:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.590 15:39:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.590 15:39:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.590 15:39:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.590 15:39:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.590 15:39:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.590 15:39:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.590 15:39:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.590 15:39:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.590 15:39:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.590 15:39:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.590 15:39:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.590 15:39:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.590 15:39:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.590 15:39:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.590 15:39:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.590 15:39:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.590 15:39:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.590 15:39:36 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:16:35.590 00:16:35.590 --- 10.0.0.2 ping statistics --- 00:16:35.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.590 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:35.590 15:39:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:35.590 00:16:35.590 --- 10.0.0.3 ping statistics --- 00:16:35.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.590 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:35.590 15:39:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:16:35.590 00:16:35.590 --- 10.0.0.1 ping statistics --- 00:16:35.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.590 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:35.590 15:39:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.590 15:39:36 -- nvmf/common.sh@422 -- # return 0 00:16:35.590 15:39:36 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:16:35.590 15:39:36 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.116 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:36.116 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:36.116 15:39:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.116 15:39:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:36.116 15:39:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:36.116 15:39:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.116 15:39:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:36.116 15:39:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:36.116 15:39:37 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:16:36.116 15:39:37 -- target/dif.sh@137 -- # nvmfappstart 00:16:36.116 15:39:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:36.116 15:39:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:36.116 15:39:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.116 15:39:37 -- nvmf/common.sh@470 -- # nvmfpid=79172 00:16:36.116 15:39:37 -- nvmf/common.sh@471 -- # waitforlisten 79172 00:16:36.116 15:39:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:36.116 15:39:37 -- common/autotest_common.sh@817 -- # '[' -z 79172 ']' 00:16:36.116 15:39:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.116 15:39:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:36.116 15:39:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
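Condensed from the nvmf_veth_init trace above: the network the dif tests run against is two veth pairs plus an initiator-side pair, all bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Every command in the sketch below appears verbatim in the trace; they are only grouped and lightly reordered here for readability.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability check
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The target then listens on 10.0.0.2:4420 inside the namespace, while rpc.py keeps talking to it over /var/tmp/spdk.sock, since ip netns exec changes only the network namespace and the Unix socket stays on the shared filesystem.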
00:16:36.116 15:39:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:36.116 15:39:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.116 [2024-04-17 15:39:37.427353] Starting SPDK v24.05-pre git sha1 480afb9a1 / DPDK 23.11.0 initialization... 00:16:36.116 [2024-04-17 15:39:37.427479] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.376 [2024-04-17 15:39:37.568505] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.376 [2024-04-17 15:39:37.727736] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.376 [2024-04-17 15:39:37.727851] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.376 [2024-04-17 15:39:37.727865] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.376 [2024-04-17 15:39:37.727876] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.376 [2024-04-17 15:39:37.727886] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.376 [2024-04-17 15:39:37.727933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.311 15:39:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:37.311 15:39:38 -- common/autotest_common.sh@850 -- # return 0 00:16:37.311 15:39:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:37.311 15:39:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 15:39:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.311 15:39:38 -- target/dif.sh@139 -- # create_transport 00:16:37.311 15:39:38 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:16:37.311 15:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 [2024-04-17 15:39:38.433469] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.311 15:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.311 15:39:38 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:16:37.311 15:39:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:37.311 15:39:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 ************************************ 00:16:37.311 START TEST fio_dif_1_default 00:16:37.311 ************************************ 00:16:37.311 15:39:38 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:16:37.311 15:39:38 -- target/dif.sh@86 -- # create_subsystems 0 00:16:37.311 15:39:38 -- target/dif.sh@28 -- # local sub 00:16:37.311 15:39:38 -- target/dif.sh@30 -- # for sub in "$@" 00:16:37.311 15:39:38 -- target/dif.sh@31 -- # create_subsystem 0 00:16:37.311 15:39:38 -- target/dif.sh@18 -- # local sub_id=0 00:16:37.311 15:39:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:16:37.311 15:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 bdev_null0 00:16:37.311 15:39:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.311 15:39:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:16:37.311 15:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 15:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.311 15:39:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:16:37.311 15:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 15:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.311 15:39:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:37.311 15:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.311 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 [2024-04-17 15:39:38.538645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.311 15:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.311 15:39:38 -- target/dif.sh@87 -- # fio /dev/fd/62 00:16:37.311 15:39:38 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:16:37.311 15:39:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:16:37.311 15:39:38 -- nvmf/common.sh@521 -- # config=() 00:16:37.311 15:39:38 -- nvmf/common.sh@521 -- # local subsystem config 00:16:37.311 15:39:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:37.311 15:39:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:37.311 15:39:38 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:37.311 15:39:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:37.311 { 00:16:37.311 "params": { 00:16:37.311 "name": "Nvme$subsystem", 00:16:37.311 "trtype": "$TEST_TRANSPORT", 00:16:37.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.311 "adrfam": "ipv4", 00:16:37.311 "trsvcid": "$NVMF_PORT", 00:16:37.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.311 "hdgst": ${hdgst:-false}, 00:16:37.311 "ddgst": ${ddgst:-false} 00:16:37.311 }, 00:16:37.311 "method": "bdev_nvme_attach_controller" 00:16:37.311 } 00:16:37.311 EOF 00:16:37.311 )") 00:16:37.311 15:39:38 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:16:37.311 15:39:38 -- target/dif.sh@82 -- # gen_fio_conf 00:16:37.311 15:39:38 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:37.311 15:39:38 -- common/autotest_common.sh@1325 -- # local sanitizers 00:16:37.311 15:39:38 -- target/dif.sh@54 -- # local file 00:16:37.311 15:39:38 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.311 15:39:38 -- common/autotest_common.sh@1327 -- # shift 00:16:37.311 15:39:38 -- target/dif.sh@56 -- # cat 00:16:37.311 15:39:38 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:16:37.311 15:39:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:37.311 15:39:38 -- nvmf/common.sh@543 -- # cat 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.311 15:39:38 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:37.311 15:39:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:16:37.311 15:39:38 -- target/dif.sh@72 -- # (( file <= files )) 00:16:37.311 15:39:38 -- nvmf/common.sh@545 -- # jq . 00:16:37.311 15:39:38 -- nvmf/common.sh@546 -- # IFS=, 00:16:37.311 15:39:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:37.311 "params": { 00:16:37.311 "name": "Nvme0", 00:16:37.311 "trtype": "tcp", 00:16:37.311 "traddr": "10.0.0.2", 00:16:37.311 "adrfam": "ipv4", 00:16:37.311 "trsvcid": "4420", 00:16:37.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:37.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:37.311 "hdgst": false, 00:16:37.311 "ddgst": false 00:16:37.311 }, 00:16:37.311 "method": "bdev_nvme_attach_controller" 00:16:37.311 }' 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:16:37.311 15:39:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:16:37.311 15:39:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:37.311 15:39:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:16:37.311 15:39:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:16:37.311 15:39:38 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:37.311 15:39:38 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:37.311 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:16:37.311 fio-3.35 00:16:37.311 Starting 1 thread 00:16:37.879 [2024-04-17 15:39:39.222555] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
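The fio_dif_1_default job above drives fio's spdk_bdev ioengine with a JSON config generated on the fly (the printf '{ "params": ... }' block in the trace). For anyone who wants to rerun a similar job outside the harness, a hand-written equivalent could look like the sketch below: the bdev_nvme_attach_controller parameters are copied from the trace, while the surrounding "subsystems"/"config" wrapper and the Nvme0n1 filename follow SPDK's usual JSON-config and bdev-naming conventions rather than anything printed here, so treat those parts as assumptions.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same plugin/ioengine pairing as the run above; the job parameters mirror the
# reported line (randread, 4 KiB blocks, iodepth 4, ~10 s time based).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
      --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
      --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10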
00:16:37.879 [2024-04-17 15:39:39.222674] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:50.082 00:16:50.082 filename0: (groupid=0, jobs=1): err= 0: pid=79244: Wed Apr 17 15:39:49 2024 00:16:50.082 read: IOPS=8625, BW=33.7MiB/s (35.3MB/s)(337MiB/10001msec) 00:16:50.082 slat (nsec): min=6415, max=58452, avg=8426.48, stdev=2754.16 00:16:50.082 clat (usec): min=354, max=1489, avg=438.94, stdev=25.85 00:16:50.082 lat (usec): min=361, max=1500, avg=447.37, stdev=26.45 00:16:50.082 clat percentiles (usec): 00:16:50.082 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:16:50.082 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:16:50.082 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 482], 00:16:50.082 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 603], 99.95th=[ 644], 00:16:50.082 | 99.99th=[ 1020] 00:16:50.082 bw ( KiB/s): min=33984, max=35296, per=100.00%, avg=34506.11, stdev=381.78, samples=19 00:16:50.082 iops : min= 8496, max= 8824, avg=8626.53, stdev=95.44, samples=19 00:16:50.082 lat (usec) : 500=98.62%, 750=1.35%, 1000=0.01% 00:16:50.082 lat (msec) : 2=0.02% 00:16:50.082 cpu : usr=84.61%, sys=13.58%, ctx=11, majf=0, minf=0 00:16:50.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.082 issued rwts: total=86268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:16:50.082 00:16:50.082 Run status group 0 (all jobs): 00:16:50.082 READ: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=337MiB (353MB), run=10001-10001msec 00:16:50.082 15:39:49 -- target/dif.sh@88 -- # destroy_subsystems 0 00:16:50.082 15:39:49 -- target/dif.sh@43 -- # local sub 00:16:50.082 15:39:49 -- target/dif.sh@45 -- # for sub in "$@" 00:16:50.082 15:39:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:16:50.082 15:39:49 -- target/dif.sh@36 -- # local sub_id=0 00:16:50.082 15:39:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:50.082 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.082 15:39:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:16:50.082 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.082 00:16:50.082 real 0m11.145s 00:16:50.082 user 0m9.176s 00:16:50.082 sys 0m1.693s 00:16:50.082 15:39:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 ************************************ 00:16:50.082 END TEST fio_dif_1_default 00:16:50.082 ************************************ 00:16:50.082 15:39:49 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:16:50.082 15:39:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:50.082 15:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 ************************************ 00:16:50.082 START TEST 
fio_dif_1_multi_subsystems 00:16:50.082 ************************************ 00:16:50.082 15:39:49 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:16:50.082 15:39:49 -- target/dif.sh@92 -- # local files=1 00:16:50.082 15:39:49 -- target/dif.sh@94 -- # create_subsystems 0 1 00:16:50.082 15:39:49 -- target/dif.sh@28 -- # local sub 00:16:50.082 15:39:49 -- target/dif.sh@30 -- # for sub in "$@" 00:16:50.082 15:39:49 -- target/dif.sh@31 -- # create_subsystem 0 00:16:50.082 15:39:49 -- target/dif.sh@18 -- # local sub_id=0 00:16:50.082 15:39:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:16:50.082 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 bdev_null0 00:16:50.082 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.082 15:39:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:16:50.082 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.082 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.082 15:39:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:16:50.082 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:50.083 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 [2024-04-17 15:39:49.800209] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@30 -- # for sub in "$@" 00:16:50.083 15:39:49 -- target/dif.sh@31 -- # create_subsystem 1 00:16:50.083 15:39:49 -- target/dif.sh@18 -- # local sub_id=1 00:16:50.083 15:39:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:16:50.083 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 bdev_null1 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:16:50.083 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:16:50.083 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.083 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.083 15:39:49 -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.083 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.083 15:39:49 -- target/dif.sh@95 -- # fio /dev/fd/62 00:16:50.083 15:39:49 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:16:50.083 15:39:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:16:50.083 15:39:49 -- nvmf/common.sh@521 -- # config=() 00:16:50.083 15:39:49 -- nvmf/common.sh@521 -- # local subsystem config 00:16:50.083 15:39:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:50.083 15:39:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:50.083 15:39:49 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:50.083 15:39:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:50.083 { 00:16:50.083 "params": { 00:16:50.083 "name": "Nvme$subsystem", 00:16:50.083 "trtype": "$TEST_TRANSPORT", 00:16:50.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:50.083 "adrfam": "ipv4", 00:16:50.083 "trsvcid": "$NVMF_PORT", 00:16:50.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:50.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:50.083 "hdgst": ${hdgst:-false}, 00:16:50.083 "ddgst": ${ddgst:-false} 00:16:50.083 }, 00:16:50.083 "method": "bdev_nvme_attach_controller" 00:16:50.083 } 00:16:50.083 EOF 00:16:50.083 )") 00:16:50.083 15:39:49 -- target/dif.sh@82 -- # gen_fio_conf 00:16:50.083 15:39:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:16:50.083 15:39:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:50.083 15:39:49 -- target/dif.sh@54 -- # local file 00:16:50.083 15:39:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:16:50.083 15:39:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.083 15:39:49 -- target/dif.sh@56 -- # cat 00:16:50.083 15:39:49 -- common/autotest_common.sh@1327 -- # shift 00:16:50.083 15:39:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:16:50.083 15:39:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.083 15:39:49 -- nvmf/common.sh@543 -- # cat 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:16:50.083 15:39:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:16:50.083 15:39:49 -- target/dif.sh@72 -- # (( file <= files )) 00:16:50.083 15:39:49 -- target/dif.sh@73 -- # cat 00:16:50.083 15:39:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:50.083 15:39:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:50.083 { 00:16:50.083 "params": { 00:16:50.083 "name": "Nvme$subsystem", 00:16:50.083 "trtype": "$TEST_TRANSPORT", 00:16:50.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:50.083 "adrfam": "ipv4", 00:16:50.083 "trsvcid": "$NVMF_PORT", 00:16:50.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:50.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:50.083 "hdgst": ${hdgst:-false}, 00:16:50.083 "ddgst": ${ddgst:-false} 00:16:50.083 }, 00:16:50.083 "method": "bdev_nvme_attach_controller" 00:16:50.083 } 00:16:50.083 EOF 00:16:50.083 )") 00:16:50.083 15:39:49 -- nvmf/common.sh@543 -- # cat 00:16:50.083 15:39:49 -- target/dif.sh@72 
-- # (( file++ )) 00:16:50.083 15:39:49 -- target/dif.sh@72 -- # (( file <= files )) 00:16:50.083 15:39:49 -- nvmf/common.sh@545 -- # jq . 00:16:50.083 15:39:49 -- nvmf/common.sh@546 -- # IFS=, 00:16:50.083 15:39:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:50.083 "params": { 00:16:50.083 "name": "Nvme0", 00:16:50.083 "trtype": "tcp", 00:16:50.083 "traddr": "10.0.0.2", 00:16:50.083 "adrfam": "ipv4", 00:16:50.083 "trsvcid": "4420", 00:16:50.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:50.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:50.083 "hdgst": false, 00:16:50.083 "ddgst": false 00:16:50.083 }, 00:16:50.083 "method": "bdev_nvme_attach_controller" 00:16:50.083 },{ 00:16:50.083 "params": { 00:16:50.083 "name": "Nvme1", 00:16:50.083 "trtype": "tcp", 00:16:50.083 "traddr": "10.0.0.2", 00:16:50.083 "adrfam": "ipv4", 00:16:50.083 "trsvcid": "4420", 00:16:50.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.083 "hdgst": false, 00:16:50.083 "ddgst": false 00:16:50.083 }, 00:16:50.083 "method": "bdev_nvme_attach_controller" 00:16:50.083 }' 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:16:50.083 15:39:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:16:50.083 15:39:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:16:50.083 15:39:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:16:50.083 15:39:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:16:50.083 15:39:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:50.083 15:39:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:16:50.083 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:16:50.083 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:16:50.083 fio-3.35 00:16:50.083 Starting 2 threads 00:16:50.083 [2024-04-17 15:39:50.560312] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
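For the two-subsystem case the target-side setup reduces to the same four RPCs issued once per cnode. The sketch below is condensed from the rpc_cmd calls visible in the trace (rpc.py path as used earlier in the log); it is a convenience rewrite of those calls, not an additional step the test performs.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1; do
    # 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done
# Teardown mirrors it: nvmf_delete_subsystem + bdev_null_delete per index.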
00:16:50.083 [2024-04-17 15:39:50.560419] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:00.060 00:17:00.060 filename0: (groupid=0, jobs=1): err= 0: pid=79411: Wed Apr 17 15:40:00 2024 00:17:00.060 read: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:17:00.060 slat (nsec): min=5841, max=74414, avg=13200.65, stdev=3984.29 00:17:00.060 clat (usec): min=624, max=4647, avg=812.86, stdev=56.27 00:17:00.060 lat (usec): min=631, max=4685, avg=826.06, stdev=56.98 00:17:00.060 clat percentiles (usec): 00:17:00.060 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 783], 00:17:00.060 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:17:00.060 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 881], 00:17:00.060 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 955], 99.95th=[ 1336], 00:17:00.060 | 99.99th=[ 1500] 00:17:00.060 bw ( KiB/s): min=18656, max=19168, per=50.06%, avg=18881.37, stdev=132.67, samples=19 00:17:00.060 iops : min= 4664, max= 4792, avg=4720.32, stdev=33.17, samples=19 00:17:00.060 lat (usec) : 750=7.63%, 1000=92.32% 00:17:00.060 lat (msec) : 2=0.04%, 10=0.01% 00:17:00.060 cpu : usr=90.33%, sys=8.33%, ctx=9, majf=0, minf=9 00:17:00.060 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.060 issued rwts: total=47152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.060 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:00.060 filename1: (groupid=0, jobs=1): err= 0: pid=79412: Wed Apr 17 15:40:00 2024 00:17:00.060 read: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:17:00.060 slat (nsec): min=6677, max=67994, avg=13469.11, stdev=4021.64 00:17:00.060 clat (usec): min=672, max=4453, avg=811.48, stdev=49.73 00:17:00.060 lat (usec): min=679, max=4472, avg=824.95, stdev=49.97 00:17:00.060 clat percentiles (usec): 00:17:00.060 | 1.00th=[ 734], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:17:00.060 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:17:00.060 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 873], 00:17:00.060 | 99.00th=[ 906], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 1336], 00:17:00.060 | 99.99th=[ 1483] 00:17:00.060 bw ( KiB/s): min=18656, max=19168, per=50.06%, avg=18883.37, stdev=132.32, samples=19 00:17:00.060 iops : min= 4664, max= 4792, avg=4720.84, stdev=33.08, samples=19 00:17:00.060 lat (usec) : 750=2.89%, 1000=97.05% 00:17:00.060 lat (msec) : 2=0.04%, 10=0.01% 00:17:00.060 cpu : usr=90.00%, sys=8.64%, ctx=13, majf=0, minf=0 00:17:00.060 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.060 issued rwts: total=47152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.060 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:00.060 00:17:00.060 Run status group 0 (all jobs): 00:17:00.060 READ: bw=36.8MiB/s (38.6MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=368MiB (386MB), run=10001-10001msec 00:17:00.060 15:40:00 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:17:00.060 15:40:00 -- target/dif.sh@43 -- # local sub 00:17:00.060 15:40:00 -- target/dif.sh@45 -- # for sub in "$@" 00:17:00.060 15:40:00 -- target/dif.sh@46 
-- # destroy_subsystem 0 00:17:00.060 15:40:00 -- target/dif.sh@36 -- # local sub_id=0 00:17:00.060 15:40:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:00.060 15:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:00.060 15:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:00 -- target/dif.sh@45 -- # for sub in "$@" 00:17:00.060 15:40:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:17:00.060 15:40:00 -- target/dif.sh@36 -- # local sub_id=1 00:17:00.060 15:40:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.060 15:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:17:00.060 15:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 00:17:00.060 real 0m11.241s 00:17:00.060 user 0m18.833s 00:17:00.060 sys 0m2.035s 00:17:00.060 15:40:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.060 ************************************ 00:17:00.060 END TEST fio_dif_1_multi_subsystems 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 ************************************ 00:17:00.060 15:40:01 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:17:00.060 15:40:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.060 15:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 ************************************ 00:17:00.060 START TEST fio_dif_rand_params 00:17:00.060 ************************************ 00:17:00.060 15:40:01 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:17:00.060 15:40:01 -- target/dif.sh@100 -- # local NULL_DIF 00:17:00.060 15:40:01 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:17:00.060 15:40:01 -- target/dif.sh@103 -- # NULL_DIF=3 00:17:00.060 15:40:01 -- target/dif.sh@103 -- # bs=128k 00:17:00.060 15:40:01 -- target/dif.sh@103 -- # numjobs=3 00:17:00.060 15:40:01 -- target/dif.sh@103 -- # iodepth=3 00:17:00.060 15:40:01 -- target/dif.sh@103 -- # runtime=5 00:17:00.060 15:40:01 -- target/dif.sh@105 -- # create_subsystems 0 00:17:00.060 15:40:01 -- target/dif.sh@28 -- # local sub 00:17:00.060 15:40:01 -- target/dif.sh@30 -- # for sub in "$@" 00:17:00.060 15:40:01 -- target/dif.sh@31 -- # create_subsystem 0 00:17:00.060 15:40:01 -- target/dif.sh@18 -- # local sub_id=0 00:17:00.060 15:40:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:17:00.060 15:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 bdev_null0 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
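fio_dif_rand_params switches the null bdev to DIF type 3 (the bdev_null_create call just above: 64 MB, 512-byte blocks, 16 bytes of per-block metadata), and since the transport was created earlier with --dif-insert-or-strip, the target side inserts and strips the protection information rather than the host. A quick, optional way to confirm what was just created is to read the bdev back with a standard RPC; exactly which metadata/DIF fields appear in the output is left unspecified here, so inspect the JSON rather than relying on particular field names.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Dump the freshly created null bdev; look for the block size, metadata size
# and DIF type in the returned JSON.
$rpc bdev_get_bdevs -b bdev_null0

The fio job built for this test then runs with the parameters set at the top of the test above: bs=128k, numjobs=3, iodepth=3, runtime=5.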
00:17:00.060 15:40:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:00.060 15:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:00.060 15:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:00.060 15:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.060 15:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:00.060 [2024-04-17 15:40:01.161430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.060 15:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.060 15:40:01 -- target/dif.sh@106 -- # fio /dev/fd/62 00:17:00.060 15:40:01 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:17:00.060 15:40:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:00.060 15:40:01 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:00.060 15:40:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:00.060 15:40:01 -- target/dif.sh@82 -- # gen_fio_conf 00:17:00.060 15:40:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:00.060 15:40:01 -- target/dif.sh@54 -- # local file 00:17:00.060 15:40:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:00.060 15:40:01 -- target/dif.sh@56 -- # cat 00:17:00.060 15:40:01 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.060 15:40:01 -- common/autotest_common.sh@1327 -- # shift 00:17:00.060 15:40:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:00.060 15:40:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.060 15:40:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:00.060 15:40:01 -- nvmf/common.sh@521 -- # config=() 00:17:00.060 15:40:01 -- nvmf/common.sh@521 -- # local subsystem config 00:17:00.060 15:40:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.060 15:40:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.060 { 00:17:00.060 "params": { 00:17:00.060 "name": "Nvme$subsystem", 00:17:00.060 "trtype": "$TEST_TRANSPORT", 00:17:00.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.060 "adrfam": "ipv4", 00:17:00.060 "trsvcid": "$NVMF_PORT", 00:17:00.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.060 "hdgst": ${hdgst:-false}, 00:17:00.060 "ddgst": ${ddgst:-false} 00:17:00.060 }, 00:17:00.060 "method": "bdev_nvme_attach_controller" 00:17:00.060 } 00:17:00.060 EOF 00:17:00.060 )") 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:00.060 15:40:01 -- target/dif.sh@72 -- # (( 
file = 1 )) 00:17:00.060 15:40:01 -- target/dif.sh@72 -- # (( file <= files )) 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:00.060 15:40:01 -- nvmf/common.sh@543 -- # cat 00:17:00.060 15:40:01 -- nvmf/common.sh@545 -- # jq . 00:17:00.060 15:40:01 -- nvmf/common.sh@546 -- # IFS=, 00:17:00.060 15:40:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:00.060 "params": { 00:17:00.060 "name": "Nvme0", 00:17:00.060 "trtype": "tcp", 00:17:00.060 "traddr": "10.0.0.2", 00:17:00.060 "adrfam": "ipv4", 00:17:00.060 "trsvcid": "4420", 00:17:00.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:00.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:00.060 "hdgst": false, 00:17:00.060 "ddgst": false 00:17:00.060 }, 00:17:00.060 "method": "bdev_nvme_attach_controller" 00:17:00.060 }' 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:00.060 15:40:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:00.060 15:40:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:00.060 15:40:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:00.060 15:40:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:00.060 15:40:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:00.060 15:40:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:00.060 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:17:00.060 ... 00:17:00.060 fio-3.35 00:17:00.060 Starting 3 threads 00:17:00.626 [2024-04-17 15:40:01.808075] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
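The bdev_nvme_attach_controller JSON printed a few lines above is the bdev-layer half of the setup: it tells the fio spdk_bdev plugin to attach an NVMe/TCP controller named Nvme0 to cnode0, whose namespace shows up as bdev Nvme0n1. Taken together with the job options set at dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5), the wrapped invocation is roughly equivalent to a hand-written job file (call it dif_rand.fio) like the one below. The exact file emitted by gen_fio_conf is not shown in the trace, and nvme0_json.conf stands in for the generated config (the controller params above wrapped in the bdev-subsystem JSON layout the plugin expects), so treat this as an illustrative approximation:

    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./nvme0_json.conf
    thread=1
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

run through the plugin the same way the harness does:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif_rand.fio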
00:17:00.626 [2024-04-17 15:40:01.808190] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:05.897 00:17:05.897 filename0: (groupid=0, jobs=1): err= 0: pid=79574: Wed Apr 17 15:40:06 2024 00:17:05.897 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5005msec) 00:17:05.897 slat (nsec): min=7722, max=54558, avg=16463.15, stdev=5865.37 00:17:05.897 clat (usec): min=11410, max=12404, avg=11682.80, stdev=181.11 00:17:05.897 lat (usec): min=11418, max=12416, avg=11699.26, stdev=182.05 00:17:05.897 clat percentiles (usec): 00:17:05.897 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:17:05.897 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:17:05.897 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:17:05.897 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12387], 99.95th=[12387], 00:17:05.897 | 99.99th=[12387] 00:17:05.897 bw ( KiB/s): min=32191, max=33024, per=33.35%, avg=32760.78, stdev=395.28, samples=9 00:17:05.897 iops : min= 251, max= 258, avg=255.89, stdev= 3.18, samples=9 00:17:05.897 lat (msec) : 20=100.00% 00:17:05.897 cpu : usr=90.33%, sys=9.03%, ctx=31, majf=0, minf=9 00:17:05.897 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.897 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.897 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:05.897 filename0: (groupid=0, jobs=1): err= 0: pid=79575: Wed Apr 17 15:40:06 2024 00:17:05.897 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5008msec) 00:17:05.897 slat (nsec): min=7284, max=54642, avg=15845.14, stdev=6285.09 00:17:05.897 clat (usec): min=11405, max=14873, avg=11690.85, stdev=236.11 00:17:05.897 lat (usec): min=11413, max=14914, avg=11706.69, stdev=237.10 00:17:05.898 clat percentiles (usec): 00:17:05.898 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:17:05.898 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:17:05.898 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:17:05.898 | 99.00th=[12256], 99.50th=[12256], 99.90th=[14877], 99.95th=[14877], 00:17:05.898 | 99.99th=[14877] 00:17:05.898 bw ( KiB/s): min=32256, max=33024, per=33.35%, avg=32760.67, stdev=379.10, samples=9 00:17:05.898 iops : min= 252, max= 258, avg=255.89, stdev= 2.93, samples=9 00:17:05.898 lat (msec) : 20=100.00% 00:17:05.898 cpu : usr=90.23%, sys=9.15%, ctx=8, majf=0, minf=9 00:17:05.898 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.898 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:05.898 filename0: (groupid=0, jobs=1): err= 0: pid=79576: Wed Apr 17 15:40:06 2024 00:17:05.898 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5005msec) 00:17:05.898 slat (nsec): min=7263, max=65122, avg=16182.95, stdev=6391.73 00:17:05.898 clat (usec): min=8952, max=14539, avg=11681.84, stdev=258.69 00:17:05.898 lat (usec): min=8962, max=14561, avg=11698.02, stdev=259.22 00:17:05.898 clat percentiles (usec): 00:17:05.898 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 
20.00th=[11469], 00:17:05.898 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:17:05.898 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:17:05.898 | 99.00th=[12256], 99.50th=[12387], 99.90th=[14484], 99.95th=[14484], 00:17:05.898 | 99.99th=[14484] 00:17:05.898 bw ( KiB/s): min=32191, max=33090, per=33.36%, avg=32768.11, stdev=401.34, samples=9 00:17:05.898 iops : min= 251, max= 258, avg=255.89, stdev= 3.18, samples=9 00:17:05.898 lat (msec) : 10=0.23%, 20=99.77% 00:17:05.898 cpu : usr=91.03%, sys=8.33%, ctx=7, majf=0, minf=9 00:17:05.898 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.898 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:05.898 00:17:05.898 Run status group 0 (all jobs): 00:17:05.898 READ: bw=95.9MiB/s (101MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=480MiB (504MB), run=5005-5008msec 00:17:05.898 15:40:07 -- target/dif.sh@107 -- # destroy_subsystems 0 00:17:05.898 15:40:07 -- target/dif.sh@43 -- # local sub 00:17:05.898 15:40:07 -- target/dif.sh@45 -- # for sub in "$@" 00:17:05.898 15:40:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:05.898 15:40:07 -- target/dif.sh@36 -- # local sub_id=0 00:17:05.898 15:40:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # NULL_DIF=2 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # bs=4k 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # numjobs=8 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # iodepth=16 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # runtime= 00:17:05.898 15:40:07 -- target/dif.sh@109 -- # files=2 00:17:05.898 15:40:07 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:17:05.898 15:40:07 -- target/dif.sh@28 -- # local sub 00:17:05.898 15:40:07 -- target/dif.sh@30 -- # for sub in "$@" 00:17:05.898 15:40:07 -- target/dif.sh@31 -- # create_subsystem 0 00:17:05.898 15:40:07 -- target/dif.sh@18 -- # local sub_id=0 00:17:05.898 15:40:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 bdev_null0 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 [2024-04-17 15:40:07.303239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@30 -- # for sub in "$@" 00:17:05.898 15:40:07 -- target/dif.sh@31 -- # create_subsystem 1 00:17:05.898 15:40:07 -- target/dif.sh@18 -- # local sub_id=1 00:17:05.898 15:40:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 bdev_null1 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.898 15:40:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.898 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.898 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.898 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.157 15:40:07 -- target/dif.sh@30 -- # for sub in "$@" 00:17:06.157 15:40:07 -- target/dif.sh@31 -- # create_subsystem 2 00:17:06.157 15:40:07 -- target/dif.sh@18 -- # local sub_id=2 00:17:06.157 15:40:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:17:06.157 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.157 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:06.157 bdev_null2 00:17:06.157 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.157 15:40:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:17:06.157 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.157 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:06.157 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.157 15:40:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:17:06.157 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.157 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:06.157 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:17:06.157 15:40:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:06.157 15:40:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.157 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:06.157 15:40:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.157 15:40:07 -- target/dif.sh@112 -- # fio /dev/fd/62 00:17:06.157 15:40:07 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:17:06.157 15:40:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:17:06.157 15:40:07 -- nvmf/common.sh@521 -- # config=() 00:17:06.157 15:40:07 -- nvmf/common.sh@521 -- # local subsystem config 00:17:06.157 15:40:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:06.157 { 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme$subsystem", 00:17:06.157 "trtype": "$TEST_TRANSPORT", 00:17:06.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "$NVMF_PORT", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.157 "hdgst": ${hdgst:-false}, 00:17:06.157 "ddgst": ${ddgst:-false} 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 } 00:17:06.157 EOF 00:17:06.157 )") 00:17:06.157 15:40:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:06.157 15:40:07 -- target/dif.sh@82 -- # gen_fio_conf 00:17:06.157 15:40:07 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:06.157 15:40:07 -- target/dif.sh@54 -- # local file 00:17:06.157 15:40:07 -- target/dif.sh@56 -- # cat 00:17:06.157 15:40:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:06.157 15:40:07 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:06.157 15:40:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:06.157 15:40:07 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # cat 00:17:06.157 15:40:07 -- common/autotest_common.sh@1327 -- # shift 00:17:06.157 15:40:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:06.157 15:40:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file <= files )) 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.157 15:40:07 -- target/dif.sh@73 -- # cat 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:06.157 15:40:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:06.157 { 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme$subsystem", 00:17:06.157 "trtype": "$TEST_TRANSPORT", 00:17:06.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "$NVMF_PORT", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.157 "hdgst": ${hdgst:-false}, 
00:17:06.157 "ddgst": ${ddgst:-false} 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 } 00:17:06.157 EOF 00:17:06.157 )") 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # cat 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file++ )) 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file <= files )) 00:17:06.157 15:40:07 -- target/dif.sh@73 -- # cat 00:17:06.157 15:40:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:06.157 { 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme$subsystem", 00:17:06.157 "trtype": "$TEST_TRANSPORT", 00:17:06.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "$NVMF_PORT", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.157 "hdgst": ${hdgst:-false}, 00:17:06.157 "ddgst": ${ddgst:-false} 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 } 00:17:06.157 EOF 00:17:06.157 )") 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file++ )) 00:17:06.157 15:40:07 -- target/dif.sh@72 -- # (( file <= files )) 00:17:06.157 15:40:07 -- nvmf/common.sh@543 -- # cat 00:17:06.157 15:40:07 -- nvmf/common.sh@545 -- # jq . 00:17:06.157 15:40:07 -- nvmf/common.sh@546 -- # IFS=, 00:17:06.157 15:40:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme0", 00:17:06.157 "trtype": "tcp", 00:17:06.157 "traddr": "10.0.0.2", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "4420", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:06.157 "hdgst": false, 00:17:06.157 "ddgst": false 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 },{ 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme1", 00:17:06.157 "trtype": "tcp", 00:17:06.157 "traddr": "10.0.0.2", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "4420", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.157 "hdgst": false, 00:17:06.157 "ddgst": false 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 },{ 00:17:06.157 "params": { 00:17:06.157 "name": "Nvme2", 00:17:06.157 "trtype": "tcp", 00:17:06.157 "traddr": "10.0.0.2", 00:17:06.157 "adrfam": "ipv4", 00:17:06.157 "trsvcid": "4420", 00:17:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:06.157 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:06.157 "hdgst": false, 00:17:06.157 "ddgst": false 00:17:06.157 }, 00:17:06.157 "method": "bdev_nvme_attach_controller" 00:17:06.157 }' 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:06.157 15:40:07 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:06.157 15:40:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:06.157 15:40:07 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:06.157 15:40:07 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:06.157 15:40:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:06.157 15:40:07 -- 
common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:06.157 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:06.157 ... 00:17:06.157 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:06.157 ... 00:17:06.157 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:06.157 ... 00:17:06.157 fio-3.35 00:17:06.157 Starting 24 threads 00:17:06.799 [2024-04-17 15:40:08.168438] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:06.799 [2024-04-17 15:40:08.168544] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:19.036 [2024-04-17 15:40:18.602191] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1edd380 via correct icresp 00:17:19.036 [2024-04-17 15:40:18.602944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edd380 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:17:19.036 fio: pid=79700, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:17:19.036 fio: pid=79690, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.036 [2024-04-17 15:40:18.626171] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1edd040 via correct icresp 00:17:19.036 [2024-04-17 15:40:18.626209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edd040 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read 
offset=65490944, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:17:19.036 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:17:19.036 [2024-04-17 15:40:18.636135] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253e9c0 via correct icresp 00:17:19.036 [2024-04-17 15:40:18.636175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253e9c0 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:17:19.036 fio: pid=79695, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:17:19.036 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:17:19.036 [2024-04-17 15:40:18.643172] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253f6c0 via correct icresp 00:17:19.036 [2024-04-17 15:40:18.643295] 
nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253f6c0 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:17:19.036 fio: pid=79681, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=17149952, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=8400896, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=53698560, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=1536000, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=23891968, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=49221632, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=13709312, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=43323392, buflen=4096 00:17:19.036 fio: io_u error on file Nvme0n1: Input/output error: read offset=26206208, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=66519040, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=53985280, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=5378048, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=2535424, buflen=4096 00:17:19.037 fio: pid=79677, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=48545792, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=16453632, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=57090048, buflen=4096 00:17:19.037 [2024-04-17 15:40:18.655213] 
nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1eddba0 via correct icresp 00:17:19.037 [2024-04-17 15:40:18.655377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1eddba0 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:17:19.037 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:17:19.037 fio: pid=79691, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 [2024-04-17 15:40:18.659254] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253e680 via correct icresp 00:17:19.037 [2024-04-17 15:40:18.659266] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253ed00 via correct icresp 00:17:19.037 [2024-04-17 15:40:18.659294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253e680 00:17:19.037 [2024-04-17 15:40:18.659316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253ed00 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:17:19.037 fio: pid=79679, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 
00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=66637824, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=15638528, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=1110016, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=49389568, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=64516096, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=56307712, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=49569792, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=48312320, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=19070976, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=40300544, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=942080, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=15732736, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=24481792, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=34430976, buflen=4096 00:17:19.037 fio: pid=79697, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=7131136, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=21241856, buflen=4096 00:17:19.037 [2024-04-17 15:40:18.660200] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253fa00 via correct icresp 00:17:19.037 [2024-04-17 15:40:18.660233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253fa00 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: 
read offset=37404672, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:17:19.037 fio: pid=79680, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:17:19.037 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:17:19.037 fio: pid=79694, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:17:19.037 fio: io_u error on file Nvme2n1: Input/output error: read offset=41910272, buflen=4096 00:17:19.037 [2024-04-17 15:40:18.661340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661445] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.037 [2024-04-17 15:40:18.661512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.038 [2024-04-17 15:40:18.661521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a791f0 is same with the state(5) to be set 00:17:19.038 [2024-04-17 15:40:18.667218] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3754ea0 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667220] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253f040 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3754ea0 00:17:19.038 [2024-04-17 15:40:18.667298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253f040 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:17:19.038 fio: io_u 
error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:17:19.038 fio: pid=79693, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: pid=79685, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:17:19.038 [2024-04-17 15:40:18.667811] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253fd40 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667824] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x37544e0 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667833] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3754b60 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x37544e0 00:17:19.038 [2024-04-17 15:40:18.667968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3754b60 00:17:19.038 [2024-04-17 15:40:18.668117] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x253f380 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.668133] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x37541a0 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.668161] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3754820 via correct icresp 00:17:19.038 [2024-04-17 15:40:18.667888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253fd40 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 
00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:17:19.038 [2024-04-17 15:40:18.668338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x253f380 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:17:19.038 fio: pid=79683, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:17:19.038 [2024-04-17 15:40:18.668362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x37541a0 00:17:19.038 [2024-04-17 15:40:18.668409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105f860 (9): Bad file descriptor 00:17:19.038 fio: pid=79684, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:17:19.038 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:17:19.038 fio: io_u error 
on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:17:19.038 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:17:19.038 fio: pid=79699, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 [2024-04-17 15:40:18.668859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3754820 00:17:19.038 [2024-04-17 15:40:18.669040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1072000 (9): Bad file descriptor 00:17:19.038 fio: pid=79689, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=53186560, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=34996224, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=37572608, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=19476480, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=27541504, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=44208128, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=36532224, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=38891520, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=913408, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=58646528, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=18313216, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=24666112, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=58052608, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=48078848, buflen=4096 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=4517888, buflen=4096 
00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=50819072, buflen=4096 00:17:19.038 fio: pid=79687, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.038 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=35905536, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:17:19.039 [2024-04-17 15:40:18.670016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1072680 (9): Bad file descriptor 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:17:19.039 fio: pid=79688, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 
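The repeated nvme_tcp.c messages above ("Failed to connect tqpair", "Failed to flush tqpair ... (9): Bad file descriptor") mean the initiator-side TCP queue pairs to the soft NVMe-oF target went away, so every in-flight read is completed back to fio with err=5 (EIO, "Input/output error"). A minimal sketch of target-side checks that could be run while the target is still up, assuming the default rpc.py socket, a checkout as the working directory, and the conventional NVMe/TCP port 4420 (none of these values are taken from this log):

    # confirm the nvmf target still has its TCP transport and its subsystems/listeners
    sudo ./scripts/rpc.py nvmf_get_transports
    sudo ./scripts/rpc.py nvmf_get_subsystems
    # confirm something is still listening on the NVMe/TCP port
    ss -tln | grep 4420
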
00:17:19.039 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:17:19.039 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79677: Wed Apr 17 15:40:18 2024 00:17:19.039 read: IOPS=727, BW=2898KiB/s (2967kB/s)(15.5MiB/5494msec) 00:17:19.039 slat (usec): min=4, max=4023, avg=12.91, stdev=89.82 00:17:19.039 clat (usec): min=3515, max=73185, avg=21981.12, stdev=8908.65 00:17:19.039 lat (usec): min=3523, max=73193, avg=21994.04, stdev=8909.12 00:17:19.039 clat percentiles (usec): 00:17:19.039 | 1.00th=[ 6718], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[13960], 00:17:19.039 | 30.00th=[17171], 40.00th=[20841], 50.00th=[22938], 60.00th=[23987], 00:17:19.039 | 70.00th=[25035], 80.00th=[29492], 90.00th=[34866], 95.00th=[36439], 00:17:19.039 | 99.00th=[44303], 99.50th=[47449], 99.90th=[56361], 99.95th=[72877], 00:17:19.039 | 99.99th=[72877] 00:17:19.039 bw ( KiB/s): min= 1856, max= 3152, per=12.76%, avg=2624.00, stdev=333.60, samples=10 00:17:19.039 iops : min= 464, max= 788, avg=656.00, stdev=83.40, samples=10 00:17:19.039 lat (msec) : 4=0.10%, 10=13.14%, 20=24.90%, 50=61.26%, 100=0.20% 00:17:19.039 cpu : usr=41.73%, sys=2.64%, ctx=876, majf=0, minf=9 00:17:19.039 IO depths : 1=1.8%, 2=6.2%, 4=19.0%, 8=61.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.1%, 4=92.8%, 8=2.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=3996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 0: pid=79678: Wed Apr 17 15:40:18 2024 00:17:19.039 read: IOPS=742, BW=2969KiB/s (3040kB/s)(29.0MiB/10014msec) 00:17:19.039 slat (usec): min=7, max=8030, avg=20.89, stdev=237.86 00:17:19.039 clat (usec): min=975, max=55822, avg=21397.80, stdev=7378.20 00:17:19.039 lat (usec): min=984, max=55831, avg=21418.69, stdev=7378.20 00:17:19.039 clat percentiles (usec): 00:17:19.039 | 1.00th=[ 5276], 5.00th=[10814], 10.00th=[12125], 20.00th=[14877], 00:17:19.039 | 30.00th=[16909], 40.00th=[20579], 50.00th=[22676], 60.00th=[23725], 00:17:19.039 | 70.00th=[23987], 80.00th=[24773], 90.00th=[31851], 95.00th=[35914], 00:17:19.039 | 99.00th=[41157], 99.50th=[47449], 99.90th=[48497], 99.95th=[49021], 00:17:19.039 | 99.99th=[55837] 00:17:19.039 bw ( KiB/s): min= 2352, max= 4472, per=14.43%, avg=2966.80, stdev=502.03, samples=20 00:17:19.039 iops : min= 588, max= 1118, avg=741.70, stdev=125.51, samples=20 00:17:19.039 lat (usec) : 1000=0.03% 00:17:19.039 lat (msec) : 2=0.03%, 4=0.59%, 10=3.90%, 20=33.66%, 50=61.77% 00:17:19.039 lat (msec) : 100=0.03% 00:17:19.039 cpu : usr=35.54%, sys=2.39%, ctx=1256, majf=0, minf=9 00:17:19.039 IO depths : 1=1.2%, 2=5.9%, 4=20.0%, 8=60.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=93.2%, 8=2.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=7433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79679: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79680: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79681: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 0: pid=79682: Wed Apr 17 15:40:18 2024 00:17:19.039 read: IOPS=732, BW=2928KiB/s (2999kB/s)(28.6MiB/10003msec) 00:17:19.039 slat (usec): min=3, max=8045, avg=21.07, stdev=244.88 00:17:19.039 clat (usec): min=959, max=57729, avg=21668.96, stdev=7226.97 00:17:19.039 lat (usec): min=968, max=57740, avg=21690.03, stdev=7229.38 00:17:19.039 clat percentiles (usec): 00:17:19.039 | 1.00th=[ 6521], 5.00th=[11600], 10.00th=[12125], 20.00th=[14746], 00:17:19.039 | 30.00th=[17171], 40.00th=[21365], 50.00th=[22676], 60.00th=[23725], 00:17:19.039 | 70.00th=[23987], 80.00th=[25560], 90.00th=[31327], 95.00th=[34866], 00:17:19.039 | 99.00th=[41157], 99.50th=[45351], 99.90th=[48497], 99.95th=[52167], 00:17:19.039 | 99.99th=[57934] 00:17:19.039 bw ( KiB/s): min= 2304, max= 3809, per=14.34%, avg=2946.16, stdev=453.95, samples=19 00:17:19.039 iops : min= 576, max= 952, avg=736.53, stdev=113.46, samples=19 00:17:19.039 lat (usec) : 1000=0.03% 00:17:19.039 lat (msec) : 2=0.14%, 4=0.52%, 10=2.17%, 20=32.40%, 50=64.69% 00:17:19.039 lat (msec) : 100=0.05% 00:17:19.039 cpu : usr=37.26%, sys=2.80%, ctx=1159, majf=0, minf=9 00:17:19.039 IO depths : 1=1.4%, 2=6.9%, 4=22.5%, 8=57.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=93.8%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=7323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79683: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79684: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.039 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79685: Wed Apr 17 15:40:18 2024 00:17:19.039 cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=0 00:17:19.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.039 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 0: pid=79686: Wed Apr 17 15:40:18 2024 00:17:19.040 read: IOPS=770, BW=3082KiB/s (3156kB/s)(30.1MiB/10014msec) 00:17:19.040 slat (usec): min=7, max=8027, avg=15.65, stdev=160.59 00:17:19.040 clat (usec): min=981, max=63222, avg=20650.25, stdev=7329.10 00:17:19.040 lat (usec): min=990, max=63237, avg=20665.90, stdev=7330.44 00:17:19.040 clat percentiles (usec): 00:17:19.040 | 1.00th=[ 4883], 5.00th=[ 9634], 10.00th=[11994], 20.00th=[14222], 00:17:19.040 | 30.00th=[16319], 40.00th=[19006], 50.00th=[21103], 60.00th=[22414], 00:17:19.040 | 70.00th=[23462], 80.00th=[25297], 90.00th=[30278], 95.00th=[33817], 00:17:19.040 | 99.00th=[41157], 99.50th=[43779], 99.90th=[52167], 99.95th=[54789], 00:17:19.040 | 99.99th=[63177] 00:17:19.040 bw ( KiB/s): min= 2280, max= 3952, per=14.99%, avg=3080.00, stdev=521.83, samples=20 00:17:19.040 iops : min= 570, max= 988, avg=770.00, stdev=130.46, samples=20 00:17:19.040 lat (usec) : 1000=0.03% 00:17:19.040 lat (msec) : 2=0.13%, 4=0.61%, 10=4.61%, 20=38.71%, 50=55.75% 00:17:19.040 lat (msec) : 100=0.16% 00:17:19.040 cpu : usr=38.84%, sys=3.03%, ctx=1421, majf=0, minf=9 00:17:19.040 IO depths : 1=1.0%, 2=6.2%, 4=21.3%, 8=59.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=93.5%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=7716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79687: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79688: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79689: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79690: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79691: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename1: (groupid=0, jobs=1): err= 0: pid=79692: Wed Apr 17 15:40:18 2024 00:17:19.040 read: IOPS=761, BW=3045KiB/s (3118kB/s)(29.7MiB/10001msec) 00:17:19.040 slat (usec): min=4, max=8030, avg=16.01, stdev=159.17 00:17:19.040 clat (usec): min=1617, max=58005, avg=20876.62, stdev=7194.55 00:17:19.040 lat (usec): min=1626, max=58017, avg=20892.63, stdev=7195.20 00:17:19.040 clat percentiles (usec): 00:17:19.040 | 1.00th=[ 5866], 5.00th=[ 9241], 10.00th=[12256], 20.00th=[15795], 00:17:19.040 | 30.00th=[16057], 40.00th=[17957], 50.00th=[21365], 60.00th=[23462], 00:17:19.040 | 70.00th=[23987], 80.00th=[24511], 90.00th=[31589], 95.00th=[33817], 00:17:19.040 | 99.00th=[39584], 99.50th=[40633], 99.90th=[47973], 99.95th=[56361], 00:17:19.040 | 99.99th=[57934] 00:17:19.040 bw ( KiB/s): min= 2304, max= 4166, per=14.87%, avg=3055.47, stdev=567.28, samples=19 00:17:19.040 iops : min= 576, max= 1041, avg=763.84, stdev=141.77, samples=19 00:17:19.040 lat (msec) : 2=0.04%, 4=0.56%, 10=5.71%, 20=39.60%, 50=54.03% 00:17:19.040 lat 
(msec) : 100=0.05% 00:17:19.040 cpu : usr=41.84%, sys=3.28%, ctx=1243, majf=0, minf=9 00:17:19.040 IO depths : 1=1.3%, 2=6.7%, 4=22.2%, 8=58.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=93.7%, 8=1.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=7614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79693: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79694: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79695: Wed Apr 17 15:40:18 2024 00:17:19.040 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.040 filename2: (groupid=0, jobs=1): err= 0: pid=79696: Wed Apr 17 15:40:18 2024 00:17:19.040 read: IOPS=757, BW=3031KiB/s (3103kB/s)(29.6MiB/10003msec) 00:17:19.040 slat (usec): min=5, max=8025, avg=19.21, stdev=216.01 00:17:19.040 clat (usec): min=984, max=60366, avg=20967.56, stdev=7649.13 00:17:19.040 lat (usec): min=993, max=60377, avg=20986.76, stdev=7647.85 00:17:19.040 clat percentiles (usec): 00:17:19.040 | 1.00th=[ 5145], 5.00th=[10028], 10.00th=[12911], 20.00th=[15664], 00:17:19.040 | 30.00th=[16057], 40.00th=[17695], 50.00th=[20579], 60.00th=[22676], 00:17:19.040 | 70.00th=[23725], 80.00th=[24773], 90.00th=[31851], 95.00th=[35914], 00:17:19.040 | 99.00th=[43254], 99.50th=[47449], 99.90th=[55837], 99.95th=[59507], 00:17:19.040 | 99.99th=[60556] 00:17:19.040 bw ( KiB/s): min= 1984, max= 3936, per=14.73%, avg=3027.05, stdev=519.90, samples=19 00:17:19.040 iops : min= 496, max= 984, avg=756.74, stdev=129.96, samples=19 00:17:19.040 lat (usec) : 1000=0.04% 00:17:19.040 lat (msec) : 2=0.09%, 4=0.67%, 10=4.10%, 20=43.62%, 50=51.34% 00:17:19.040 lat (msec) : 100=0.13% 00:17:19.040 cpu : usr=40.49%, sys=3.17%, ctx=1352, majf=0, minf=9 00:17:19.040 IO depths : 1=1.1%, 2=6.4%, 4=21.8%, 
8=58.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:17:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 complete : 0=0.0%, 4=93.6%, 8=1.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.040 issued rwts: total=7579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.041 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79697: Wed Apr 17 15:40:18 2024 00:17:19.041 read: IOPS=748, BW=2976KiB/s (3048kB/s)(9.80MiB/3372msec) 00:17:19.041 slat (usec): min=4, max=8039, avg=26.79, stdev=356.33 00:17:19.041 clat (usec): min=1554, max=59170, avg=21296.97, stdev=9068.24 00:17:19.041 lat (usec): min=1567, max=59189, avg=21323.86, stdev=9072.50 00:17:19.041 clat percentiles (usec): 00:17:19.041 | 1.00th=[ 1713], 5.00th=[ 2671], 10.00th=[10814], 20.00th=[11994], 00:17:19.041 | 30.00th=[16319], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:17:19.041 | 70.00th=[23987], 80.00th=[24773], 90.00th=[34866], 95.00th=[35914], 00:17:19.041 | 99.00th=[45876], 99.50th=[45876], 99.90th=[56361], 99.95th=[58983], 00:17:19.041 | 99.99th=[58983] 00:17:19.041 bw ( KiB/s): min= 2416, max= 2712, per=12.33%, avg=2534.67, stdev=109.04, samples=6 00:17:19.041 iops : min= 604, max= 678, avg=633.67, stdev=27.26, samples=6 00:17:19.041 lat (msec) : 2=3.80%, 4=3.49%, 10=1.35%, 20=21.47%, 50=69.11% 00:17:19.041 lat (msec) : 100=0.16% 00:17:19.041 cpu : usr=31.21%, sys=2.22%, ctx=334, majf=0, minf=9 00:17:19.041 IO depths : 1=2.0%, 2=6.9%, 4=20.9%, 8=59.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:17:19.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 complete : 0=0.1%, 4=93.3%, 8=1.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.041 filename2: (groupid=0, jobs=1): err= 0: pid=79698: Wed Apr 17 15:40:18 2024 00:17:19.041 read: IOPS=731, BW=2927KiB/s (2997kB/s)(28.6MiB/10022msec) 00:17:19.041 slat (usec): min=7, max=8023, avg=17.83, stdev=225.41 00:17:19.041 clat (usec): min=887, max=59053, avg=21746.87, stdev=7375.50 00:17:19.041 lat (usec): min=895, max=59076, avg=21764.70, stdev=7378.65 00:17:19.041 clat percentiles (usec): 00:17:19.041 | 1.00th=[ 7701], 5.00th=[11469], 10.00th=[12125], 20.00th=[14484], 00:17:19.041 | 30.00th=[16188], 40.00th=[21890], 50.00th=[23462], 60.00th=[23987], 00:17:19.041 | 70.00th=[23987], 80.00th=[24773], 90.00th=[33817], 95.00th=[35914], 00:17:19.041 | 99.00th=[37487], 99.50th=[44303], 99.90th=[49021], 99.95th=[49546], 00:17:19.041 | 99.99th=[58983] 00:17:19.041 bw ( KiB/s): min= 2000, max= 4032, per=14.24%, avg=2927.20, stdev=570.56, samples=20 00:17:19.041 iops : min= 500, max= 1008, avg=731.80, stdev=142.64, samples=20 00:17:19.041 lat (usec) : 1000=0.03% 00:17:19.041 lat (msec) : 2=0.03%, 4=0.40%, 10=2.44%, 20=30.58%, 50=66.50% 00:17:19.041 lat (msec) : 100=0.03% 00:17:19.041 cpu : usr=33.00%, sys=2.35%, ctx=931, majf=0, minf=9 00:17:19.041 IO depths : 1=1.3%, 2=6.5%, 4=21.4%, 8=59.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:17:19.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 issued rwts: total=7334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.041 filename2: (groupid=0, 
jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79699: Wed Apr 17 15:40:18 2024 00:17:19.041 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.041 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79700: Wed Apr 17 15:40:18 2024 00:17:19.041 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:17:19.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:19.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.041 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:19.041 00:17:19.041 Run status group 0 (all jobs): 00:17:19.041 READ: bw=20.1MiB/s (21.0MB/s), 2898KiB/s-3082KiB/s (2967kB/s-3156kB/s), io=201MiB (211MB), run=3372-10022msec 00:17:19.041 15:40:19 -- common/autotest_common.sh@1338 -- # trap - ERR 00:17:19.041 15:40:19 -- common/autotest_common.sh@1338 -- # print_backtrace 00:17:19.041 15:40:19 -- common/autotest_common.sh@1139 -- # [[ ehxBET =~ e ]] 00:17:19.041 15:40:19 -- common/autotest_common.sh@1141 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:17:19.041 15:40:19 -- common/autotest_common.sh@1141 -- # local args 00:17:19.041 15:40:19 -- common/autotest_common.sh@1143 -- # xtrace_disable 00:17:19.041 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:17:19.041 ========== Backtrace start: ========== 00:17:19.041 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1338 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:17:19.041 ... 00:17:19.041 1333 break 00:17:19.041 1334 fi 00:17:19.041 1335 done 00:17:19.041 1336 00:17:19.041 1337 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:17:19.041 1338 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:17:19.041 1339 } 00:17:19.041 1340 00:17:19.041 1341 function fio_bdev() { 00:17:19.041 1342 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:17:19.041 1343 } 00:17:19.041 ... 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1342 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:17:19.041 ... 
00:17:19.041 1337 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:17:19.041 1338 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:17:19.041 1339 } 00:17:19.041 1340 00:17:19.041 1341 function fio_bdev() { 00:17:19.041 1342 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:17:19.041 1343 } 00:17:19.041 1344 00:17:19.041 1345 function fio_nvme() { 00:17:19.041 1346 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:17:19.041 1347 } 00:17:19.041 ... 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:17:19.041 ... 00:17:19.041 77 FIO 00:17:19.041 78 done 00:17:19.041 79 } 00:17:19.041 80 00:17:19.041 81 fio() { 00:17:19.041 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:17:19.041 83 } 00:17:19.041 84 00:17:19.041 85 fio_dif_1() { 00:17:19.041 86 create_subsystems 0 00:17:19.041 87 fio <(create_json_sub_conf 0) 00:17:19.041 ... 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:17:19.041 ... 00:17:19.041 107 destroy_subsystems 0 00:17:19.041 108 00:17:19.041 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:17:19.041 110 00:17:19.041 111 create_subsystems 0 1 2 00:17:19.041 => 112 fio <(create_json_sub_conf 0 1 2) 00:17:19.041 113 destroy_subsystems 0 1 2 00:17:19.041 114 00:17:19.041 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:17:19.041 116 00:17:19.041 117 create_subsystems 0 1 00:17:19.041 ... 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1111 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:17:19.041 ... 00:17:19.041 1106 timing_enter $test_name 00:17:19.041 1107 echo "************************************" 00:17:19.041 1108 echo "START TEST $test_name" 00:17:19.041 1109 echo "************************************" 00:17:19.041 1110 xtrace_restore 00:17:19.041 1111 time "$@" 00:17:19.041 1112 xtrace_disable 00:17:19.041 1113 echo "************************************" 00:17:19.041 1114 echo "END TEST $test_name" 00:17:19.041 1115 echo "************************************" 00:17:19.041 1116 timing_exit $test_name 00:17:19.041 ... 00:17:19.041 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:17:19.041 ... 00:17:19.041 138 00:17:19.041 139 create_transport 00:17:19.041 140 00:17:19.041 141 run_test "fio_dif_1_default" fio_dif_1 00:17:19.041 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:17:19.041 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:17:19.041 144 run_test "fio_dif_digest" fio_dif_digest 00:17:19.042 145 00:17:19.042 146 trap - SIGINT SIGTERM EXIT 00:17:19.042 147 nvmftestfini 00:17:19.042 ... 
00:17:19.042 00:17:19.042 ========== Backtrace end ========== 00:17:19.042 15:40:19 -- common/autotest_common.sh@1180 -- # return 0 00:17:19.042 00:17:19.042 real 0m17.920s 00:17:19.042 user 1m50.308s 00:17:19.042 sys 0m3.988s 00:17:19.042 15:40:19 -- common/autotest_common.sh@1 -- # process_shm --id 0 00:17:19.042 15:40:19 -- common/autotest_common.sh@794 -- # type=--id 00:17:19.042 15:40:19 -- common/autotest_common.sh@795 -- # id=0 00:17:19.042 15:40:19 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:19.042 15:40:19 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:19.042 15:40:19 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:19.042 15:40:19 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:19.042 15:40:19 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:19.042 15:40:19 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:19.042 nvmf_trace.0 00:17:19.042 15:40:19 -- common/autotest_common.sh@809 -- # return 0 00:17:19.042 15:40:19 -- common/autotest_common.sh@1 -- # nvmftestfini 00:17:19.042 15:40:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:19.042 15:40:19 -- nvmf/common.sh@117 -- # sync 00:17:19.042 15:40:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.042 15:40:19 -- nvmf/common.sh@120 -- # set +e 00:17:19.042 15:40:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.042 15:40:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.042 rmmod nvme_tcp 00:17:19.042 rmmod nvme_fabrics 00:17:19.042 rmmod nvme_keyring 00:17:19.042 15:40:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.042 15:40:19 -- nvmf/common.sh@124 -- # set -e 00:17:19.042 15:40:19 -- nvmf/common.sh@125 -- # return 0 00:17:19.042 15:40:19 -- nvmf/common.sh@478 -- # '[' -n 79172 ']' 00:17:19.042 15:40:19 -- nvmf/common.sh@479 -- # killprocess 79172 00:17:19.042 15:40:19 -- common/autotest_common.sh@936 -- # '[' -z 79172 ']' 00:17:19.042 15:40:19 -- common/autotest_common.sh@940 -- # kill -0 79172 00:17:19.042 15:40:19 -- common/autotest_common.sh@941 -- # uname 00:17:19.042 15:40:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.042 15:40:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79172 00:17:19.042 15:40:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.042 15:40:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.042 15:40:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79172' 00:17:19.042 killing process with pid 79172 00:17:19.042 15:40:19 -- common/autotest_common.sh@955 -- # kill 79172 00:17:19.042 15:40:19 -- common/autotest_common.sh@960 -- # wait 79172 00:17:19.042 15:40:19 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:17:19.042 15:40:19 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:19.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.042 Waiting for block devices as requested 00:17:19.042 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:19.042 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:19.042 15:40:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:19.042 15:40:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:19.042 15:40:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.042 15:40:20 -- 
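The first backtrace above shows the call chain that produced the failing fio run: fio() in test/nvmf/target/dif.sh:82 feeds a generated job file and JSON subsystem config into fio_bdev, and fio_plugin (autotest_common.sh:1338) LD_PRELOADs the SPDK bdev engine into stock fio. A hedged sketch of the equivalent manual invocation, reusing the plugin path from the backtrace args and the job parameters from dif.sh:109 (bs=4k, numjobs=8, iodepth=16); the bdev.json file name and the Nvme0n1 bdev name are placeholders for illustration, since the harness actually passes both through /dev/fd:

    # run stock fio with the SPDK bdev engine preloaded (sketch, not the harness command line)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio \
        --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --thread=1 \
        --name=randread --rw=randread --bs=4k --iodepth=16 --numjobs=8 \
        --filename=Nvme0n1
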
nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.042 15:40:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.042 15:40:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:17:19.042 15:40:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.042 15:40:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.042 15:40:20 -- common/autotest_common.sh@1111 -- # trap - ERR 00:17:19.042 15:40:20 -- common/autotest_common.sh@1111 -- # print_backtrace 00:17:19.042 15:40:20 -- common/autotest_common.sh@1139 -- # [[ ehxBET =~ e ]] 00:17:19.042 15:40:20 -- common/autotest_common.sh@1141 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:17:19.042 15:40:20 -- common/autotest_common.sh@1141 -- # local args 00:17:19.042 15:40:20 -- common/autotest_common.sh@1143 -- # xtrace_disable 00:17:19.042 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:17:19.042 ========== Backtrace start: ========== 00:17:19.042 00:17:19.042 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1111 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:17:19.042 ... 00:17:19.042 1106 timing_enter $test_name 00:17:19.042 1107 echo "************************************" 00:17:19.042 1108 echo "START TEST $test_name" 00:17:19.042 1109 echo "************************************" 00:17:19.042 1110 xtrace_restore 00:17:19.042 1111 time "$@" 00:17:19.042 1112 xtrace_disable 00:17:19.042 1113 echo "************************************" 00:17:19.042 1114 echo "END TEST $test_name" 00:17:19.042 1115 echo "************************************" 00:17:19.042 1116 timing_exit $test_name 00:17:19.042 ... 00:17:19.042 in /home/vagrant/spdk_repo/spdk/autotest.sh:289 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:17:19.042 ... 00:17:19.042 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:17:19.042 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:17:19.042 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:17:19.042 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:17:19.042 288 fi 00:17:19.042 => 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:17:19.042 290 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:17:19.042 291 # The keyring tests utilize NVMe/TLS 00:17:19.042 292 run_test "keyring_file" "$rootdir/test/keyring/file.sh" 00:17:19.042 293 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then 00:17:19.042 294 run_test "keyring_linux" "$rootdir/test/keyring/linux.sh" 00:17:19.042 ... 
00:17:19.042 00:17:19.042 ========== Backtrace end ========== 00:17:19.042 15:40:20 -- common/autotest_common.sh@1180 -- # return 0 00:17:19.042 00:17:19.042 real 0m43.721s 00:17:19.042 user 2m52.023s 00:17:19.042 sys 0m11.675s 00:17:19.042 15:40:20 -- common/autotest_common.sh@1 -- # autotest_cleanup 00:17:19.042 15:40:20 -- common/autotest_common.sh@1378 -- # local autotest_es=18 00:17:19.042 15:40:20 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:17:19.042 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:17:31.295 INFO: APP EXITING 00:17:31.295 INFO: killing all VMs 00:17:31.295 INFO: killing vhost app 00:17:31.295 INFO: EXIT DONE 00:17:31.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:31.295 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:31.295 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:31.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:31.862 Cleaning 00:17:31.862 Removing: /var/run/dpdk/spdk0/config 00:17:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:32.121 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:32.121 Removing: /var/run/dpdk/spdk1/config 00:17:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:17:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:17:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:17:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:17:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:17:32.121 Removing: /var/run/dpdk/spdk1/hugepage_info 00:17:32.121 Removing: /var/run/dpdk/spdk2/config 00:17:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:17:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:17:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:17:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:17:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:17:32.121 Removing: /var/run/dpdk/spdk2/hugepage_info 00:17:32.121 Removing: /var/run/dpdk/spdk3/config 00:17:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:17:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:17:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:17:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:17:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:17:32.121 Removing: /var/run/dpdk/spdk3/hugepage_info 00:17:32.121 Removing: /var/run/dpdk/spdk4/config 00:17:32.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:17:32.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:17:32.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:17:32.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:17:32.121 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:17:32.121 Removing: /var/run/dpdk/spdk4/hugepage_info 00:17:32.121 Removing: /dev/shm/nvmf_trace.0 00:17:32.121 Removing: /dev/shm/spdk_tgt_trace.pid58334 00:17:32.121 Removing: /var/run/dpdk/spdk0 00:17:32.121 Removing: /var/run/dpdk/spdk1 00:17:32.121 Removing: /var/run/dpdk/spdk2 00:17:32.121 Removing: /var/run/dpdk/spdk3 00:17:32.121 Removing: 
/var/run/dpdk/spdk4 00:17:32.121 Removing: /var/run/dpdk/spdk_pid58167 00:17:32.121 Removing: /var/run/dpdk/spdk_pid58334 00:17:32.121 Removing: /var/run/dpdk/spdk_pid58605 00:17:32.121 Removing: /var/run/dpdk/spdk_pid58801 00:17:32.121 Removing: /var/run/dpdk/spdk_pid58955 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59031 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59112 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59208 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59294 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59331 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59376 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59443 00:17:32.121 Removing: /var/run/dpdk/spdk_pid59551 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60006 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60062 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60117 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60133 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60215 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60231 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60313 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60329 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60384 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60402 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60452 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60470 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60612 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60653 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60734 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60799 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60832 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60909 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60948 00:17:32.121 Removing: /var/run/dpdk/spdk_pid60986 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61030 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61071 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61116 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61154 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61193 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61237 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61281 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61314 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61358 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61402 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61445 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61481 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61528 00:17:32.121 Removing: /var/run/dpdk/spdk_pid61572 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61614 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61655 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61699 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61744 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61820 00:17:32.380 Removing: /var/run/dpdk/spdk_pid61922 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62250 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62271 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62313 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62332 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62353 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62372 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62391 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62412 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62431 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62450 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62471 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62490 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62509 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62530 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62549 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62568 00:17:32.380 
Removing: /var/run/dpdk/spdk_pid62589 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62608 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62627 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62648 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62688 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62702 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62737 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62810 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62848 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62863 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62895 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62910 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62918 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62975 00:17:32.380 Removing: /var/run/dpdk/spdk_pid62994 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63032 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63041 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63051 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63066 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63081 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63085 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63100 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63115 00:17:32.380 Removing: /var/run/dpdk/spdk_pid63142 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63181 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63196 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63223 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63238 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63251 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63295 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63307 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63343 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63351 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63358 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63371 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63373 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63386 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63394 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63401 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63484 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63543 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63662 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63707 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63754 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63774 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63798 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63818 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63855 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63876 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63958 00:17:32.381 Removing: /var/run/dpdk/spdk_pid63985 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64035 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64101 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64163 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64202 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64302 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64354 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64396 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64662 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64775 00:17:32.381 Removing: /var/run/dpdk/spdk_pid64813 00:17:32.381 Removing: /var/run/dpdk/spdk_pid65152 00:17:32.381 Removing: /var/run/dpdk/spdk_pid65186 00:17:32.381 Removing: /var/run/dpdk/spdk_pid65507 00:17:32.640 Removing: /var/run/dpdk/spdk_pid65921 00:17:32.640 Removing: /var/run/dpdk/spdk_pid66189 00:17:32.640 Removing: /var/run/dpdk/spdk_pid66986 00:17:32.640 Removing: /var/run/dpdk/spdk_pid67817 00:17:32.640 Removing: /var/run/dpdk/spdk_pid67934 00:17:32.640 Removing: 
/var/run/dpdk/spdk_pid68004 00:17:32.640 Removing: /var/run/dpdk/spdk_pid69275 00:17:32.640 Removing: /var/run/dpdk/spdk_pid69498 00:17:32.640 Removing: /var/run/dpdk/spdk_pid69806 00:17:32.640 Removing: /var/run/dpdk/spdk_pid69915 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70054 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70080 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70111 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70133 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70231 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70365 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70515 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70601 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70794 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70883 00:17:32.640 Removing: /var/run/dpdk/spdk_pid70976 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71282 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71671 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71673 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71953 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71968 00:17:32.640 Removing: /var/run/dpdk/spdk_pid71992 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72017 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72022 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72315 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72358 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72650 00:17:32.640 Removing: /var/run/dpdk/spdk_pid72846 00:17:32.640 Removing: /var/run/dpdk/spdk_pid73241 00:17:32.640 Removing: /var/run/dpdk/spdk_pid73733 00:17:32.640 Removing: /var/run/dpdk/spdk_pid74327 00:17:32.640 Removing: /var/run/dpdk/spdk_pid74329 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76288 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76348 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76408 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76474 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76599 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76665 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76725 00:17:32.640 Removing: /var/run/dpdk/spdk_pid76784 00:17:32.640 Removing: /var/run/dpdk/spdk_pid77108 00:17:32.640 Removing: /var/run/dpdk/spdk_pid78289 00:17:32.640 Removing: /var/run/dpdk/spdk_pid78434 00:17:32.640 Removing: /var/run/dpdk/spdk_pid78681 00:17:32.640 Removing: /var/run/dpdk/spdk_pid79240 00:17:32.640 Removing: /var/run/dpdk/spdk_pid79403 00:17:32.640 Removing: /var/run/dpdk/spdk_pid79565 00:17:32.640 Removing: /var/run/dpdk/spdk_pid79666 00:17:32.640 Clean 00:17:39.233 15:40:39 -- common/autotest_common.sh@1437 -- # return 18 00:17:39.233 15:40:39 -- common/autotest_common.sh@1 -- # : 00:17:39.233 15:40:39 -- common/autotest_common.sh@1 -- # exit 1 00:17:39.244 [Pipeline] } 00:17:39.265 [Pipeline] // timeout 00:17:39.271 [Pipeline] } 00:17:39.291 [Pipeline] // stage 00:17:39.298 [Pipeline] } 00:17:39.301 ERROR: script returned exit code 1 00:17:39.318 [Pipeline] // catchError 00:17:39.326 [Pipeline] stage 00:17:39.328 [Pipeline] { (Stop VM) 00:17:39.341 [Pipeline] sh 00:17:39.617 + vagrant halt 00:17:43.806 ==> default: Halting domain... 00:17:50.378 [Pipeline] sh 00:17:50.655 + vagrant destroy -f 00:17:54.843 ==> default: Removing domain... 
00:17:54.853 [Pipeline] sh 00:17:55.131 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output 00:17:55.140 [Pipeline] } 00:17:55.156 [Pipeline] // stage 00:17:55.161 [Pipeline] } 00:17:55.176 [Pipeline] // dir 00:17:55.181 [Pipeline] } 00:17:55.196 [Pipeline] // wrap 00:17:55.202 [Pipeline] } 00:17:55.216 [Pipeline] // catchError 00:17:55.224 [Pipeline] stage 00:17:55.226 [Pipeline] { (Epilogue) 00:17:55.240 [Pipeline] sh 00:17:55.518 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:57.468 [Pipeline] catchError 00:17:57.470 [Pipeline] { 00:17:57.484 [Pipeline] sh 00:17:57.762 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:57.762 Artifacts sizes are good 00:17:57.770 [Pipeline] } 00:17:57.785 [Pipeline] // catchError 00:17:57.794 [Pipeline] archiveArtifacts 00:17:57.800 Archiving artifacts 00:17:58.020 [Pipeline] cleanWs 00:17:58.031 [WS-CLEANUP] Deleting project workspace... 00:17:58.031 [WS-CLEANUP] Deferred wipeout is used... 00:17:58.037 [WS-CLEANUP] done 00:17:58.039 [Pipeline] } 00:17:58.055 [Pipeline] // stage 00:17:58.060 [Pipeline] } 00:17:58.076 [Pipeline] // node 00:17:58.081 [Pipeline] End of Pipeline 00:17:58.122 Finished: FAILURE
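
Both backtraces converge on run_test "nvmf_dif" (autotest.sh:289) failing inside fio_dif_rand_params at dif.sh:112; the stored status (autotest_es=18) then makes autotest exit non-zero, which is why the pipeline ends with "script returned exit code 1" and Finished: FAILURE. To iterate on just this suite without the full pipeline, the failing script can be re-run directly on a test VM with the arguments the backtrace shows it received; a sketch, assuming a checkout at the repo path used throughout this log and root privileges:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/dif.sh --transport=tcp --iso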